Wednesday, November 4, 2015

Procedural content generation: map construction via blind agent (a.k.a. building a Rogue-like map)

I guess this is a continuation of the previous post on procedural content generation. As before, I'm just following along and implementing the pseudocode from the Procedural Content Generation in Games book (in chapter 3; link to the .pdf).

This time we're using an agent-based approach as opposed to the binary space partitioning (BSP) used last time. This implementation is a blind digger; it seems to generate more organic layouts, but it also generates nonsensical layouts more frequently than the BSP method does. Maybe that would be good for a cave map or something of that sort.




I used Python again. Not having to implement a tree data structure this time around made things go much faster. The function that controls the overall flow is below, and you can find the entire script on my GitHub here. I'm pretty sure there are some bugs crawling around still, but it seems to work.


def generateAgentDiggerMap():
    mapHeight = 50
    mapWidth = 50
    digger = BlindDigger()
    
    # Create the map and place the digger on it.
    diggingMap = DiggingMap(mapWidth, mapHeight)
    digger.initializeDig(diggingMap)
    
    # Keep digging until 40% of the map area has been dug out.
    while (diggingMap.percentAreaDug < 40):
        digger.performDigIteration(diggingMap)
    
    diggingMap.plotDiggingMap()
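
The interesting part is the dig iteration itself, which boils down to a random walk that knocks floor tiles out of solid rock. Here's a minimal, self-contained sketch of the idea; the names and step logic here are my own stand-ins rather than exactly what's in the script:

import random

# A toy blind digger: wander randomly, digging out each tile we land on.
# '#' is rock, '.' is floor; the outer border is left as rock.
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def performDigIteration(grid, position):
    height, width = len(grid), len(grid[0])
    dy, dx = random.choice(DIRECTIONS)
    y = min(max(position[0] + dy, 1), height - 2)  # clamp to stay inside the border
    x = min(max(position[1] + dx, 1), width - 2)
    grid[y][x] = "."
    return (y, x)

grid = [["#"] * 50 for _ in range(50)]
position = (25, 25)
interiorTiles = 48 * 48
while (100.0 * sum(row.count(".") for row in grid) / interiorTiles < 40):
    position = performDigIteration(grid, position)
print("\n".join("".join(row) for row in grid))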

Both the Llama and I passed prelims! Click here for part 1.

Tuesday, November 3, 2015

Procedural content generation: map construction via binary space partitioning (a.k.a. building a Rogue-like map)

There are times that I question my market value and future job prospects as a life science Ph.D. (or whether I'll be able to finish the Ph.D. at all). When the Llama feels like this, she studies for Actuarial exams. I pursue something equally lucrative.

Video games.

These past few weeks have been spent preparing for our preliminary exams. The concomitant feelings of inadequacy led me to want to procedurally generate video game content. Seeing as I don't know the first thing about procedural content generation (PCG), I figured I'd make some maps. And make some maps I did.




Who wants to adventure through these bad boys!?

The method I used is based on binary space partitioning, and all I really did was implement the algorithm described by this book in chapter 3 (link to .pdf).

I implemented it in Python because I figured it'd be faster to prototype in than C++. I'm not sure that was actually the case. I couldn't find a tree data structure in the standard library and ended up implementing one (...and it's a bit of a mess). I also really wanted a static variable to keep track of things during the recursive tree searches, but no such thing exists in Python. I ended up passing lists around instead, which seems pretty weird, but it works because lists are mutable: changes made inside the recursion are visible to the caller, whereas rebinding a boolean or an integer inside a function only changes the local name. Anywho, it works for the most part. The function that controls the overall flow is below, and you can find the entire script on my GitHub here. I'm pretty sure there are some bugs crawling around still, but it seems to work.
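
As a toy illustration of that mutable-argument trick (this isn't the actual tree code, just the idea), mutating a shared list inside the recursion is visible to the caller, while rebinding an integer argument would only change the local name:

def countLeaves(node, counter):
    # counter is a one-element list; mutating it stands in for a static variable.
    if not node["children"]:
        counter[0] += 1
    for child in node["children"]:
        countLeaves(child, counter)

tree = {"children": [{"children": []}, {"children": [{"children": []}]}]}
counter = [0]
countLeaves(tree, counter)
print(counter[0])  # prints 2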

def generateBSPMap():
    # 1: start with the entire area (root node of the BSP tree)
    rootNodeBox = Box((0, 0), 256, 256) #The dimensions should be evenly divisible by 2
    rootNode = AreaNode("root", defaultdict(AreaNode), rootNodeBox)
    tree = AreaTree(rootNode)
    firstPartitionNames = ("A", "B")
    # 2: divide the area along a horizontal or vertical line
    tree.partitionNode("root", firstPartitionNames)
    currentArea = rootNodeBox.getArea()
    currentPartitionNames = firstPartitionNames
    MAGICMINIMUMAREA = (0.03125) * 256 * 256
    #MAGICMINIMUMAREA = (0.10) * 256 * 256
    
    while (currentArea > MAGICMINIMUMAREA):
        # 3: select one of the two new partition cells
        chosenIndex = random.choice([0, 1])
        chosenPartition = currentPartitionNames[chosenIndex]
        otherPartition = currentPartitionNames[1 - chosenIndex]
        
        #4: if this cell is bigger than the minimal acceptable size:
        print("Chosen partition " + chosenPartition + " has node area " + str(tree.getNodeArea(chosenPartition)))

        if (tree.getNodeArea(chosenPartition) > MAGICMINIMUMAREA):
            #5: go to step 2 (using this cell as the area to be divided)
            newPartitionNames = (chosenPartition + "_0", chosenPartition + "_1")
            tree.partitionNode(chosenPartition, newPartitionNames)
        
        #6: select the other partition cell, and go to step 4
        if (tree.getNodeArea(otherPartition) > MAGICMINIMUMAREA):
            newPartitionNames = (otherPartition + "_0", otherPartition + "_1")
            tree.partitionNode(otherPartition, newPartitionNames)
        
        currentArea = min([tree.getNodeArea(chosenPartition), tree.getNodeArea(otherPartition)])

        partitionNameList = []
        tree.getListOfLeafPairs(partitionNameList)
        currentPartitionNames = random.choice(partitionNameList)
        
    #7: for every partition cell:
    #8: create a room within the cell by randomly
    #   choosing two points (top left and bottom right)
    #   within its boundaries
    li_areasAreConnected = []
    terminationIterator = 0
    while (li_areasAreConnected == [False] or li_areasAreConnected == []):     
        tree.resetSubAreas()        
        tree.constructSubAreas()

        #9: starting from the lowest layers, draw corridors to connect
        #   rooms in the nodes of the BSP tree with children of the same
        #   parent
        #10:repeat 9 until the children of the root node are connected
        li_areasAreConnected = []
        tree.connectSubAreas(li_areasAreConnected)
        terminationIterator += 1
        if (terminationIterator > 50):
            print("Attempted too many iterations. Terminating.")
            print(li_areasAreConnected)
            break

    if (li_areasAreConnected == [True]):
        print(tree)
        tree.showAreaTree()

Run that thing and booooooooom.




I also used an agent-based approach in part 2 here.

Wednesday, July 8, 2015

Rotate 3D vector to same orientation as another 3D vector - Rodrigues' rotation formula

I had a normal that I wanted to orient in a particular direction, so I needed to find the rotation that would get it there. Help from the Llama and some Googling brought me to this post at Math StackExchange and Rodrigues' rotation formula.

As for background, I had a point cloud that I wanted to align to the Z axis. Since the pot of the potted plant is a relatively constant feature in these images, I wanted to use the pot to orient the rest of the cloud.


I segmented a circle out to get the border of the pot. Here's the segmented circle, floating out in space.


In the context of the Stack Exchange post, I had a normal corresponding to a point in the center of the circle that I wanted to rotate so that it aligned with the Z-axis (0, 0, 1). Between the Stack Exchange post, Wikipedia, and the Llama, I was able to write an implementation without actually knowing anything about linear algebra (see below). Everything appears to work just fine (the cyan circle is after transformation).



This implementation uses the Eigen library and the PCL library. The nomenclature is similar to the Stack Exchange post.

// This is an implementation of Rodrigues' Rotation Formula
// https://en.wikipedia.org/wiki/Rodrigues'_rotation_formula
// Following http://math.stackexchange.com/questions/293116/rotating-one-3-vector-to-another?rq=1
// Problem: Given two 3-vectors, A and B, find the rotation of A so that its orientation matches B.
// There are some edge cases where this implementation will fail, notably if the norm of the cross product = 0.
// (This snippet assumes <cmath>, <Eigen/Dense>, and <pcl/common/transforms.h> have been included.)

// Step 1: Find axis (X)
Eigen::Vector3f crossProduct = vector_A.cross(vector_B);
float crossProductNorm = crossProduct.norm();
Eigen::Vector3f vector_X = (crossProduct / crossProductNorm);

// Step 2: Find angle (theta)
float dotProduct = vector_A.dot(vector_B);
float norm_A = vector_A.norm();
float norm_B = vector_B.norm();
float dotProductOfNorms = norm_A * norm_B;
float dotProductDividedByDotProductOfNorms = (dotProduct / dotProductOfNorms);
float thetaAngleRad = acos(dotProductDividedByDotProductOfNorms);

// Step 3: Construct A, the skew-symmetric matrix corresponding to X
Eigen::Matrix3f matrix_A;
matrix_A <<          0.0, -vector_X(2),  vector_X(1),
            vector_X(2),          0.0, -vector_X(0),
           -vector_X(1),  vector_X(0),          0.0;

// Step 4: Plug and chug.
Eigen::Matrix3f IdentityMat = Eigen::Matrix3f::Identity();
Eigen::Matrix3f firstTerm = sin(thetaAngleRad) * matrix_A;
Eigen::Matrix3f secondTerm = (1.0 - cos(thetaAngleRad)) * matrix_A * matrix_A;

Eigen::Matrix3f matrix_R = IdentityMat + firstTerm + secondTerm;

// This is the rotation matrix. Finished with the Rodrigues' Rotation Formula implementation.
std::cout << "matrix_R" << std::endl << matrix_R << std::endl;


// We copy the rotation matrix into the matrix that will be used for the transformation.
Eigen::Matrix4f Transform = Eigen::Matrix4f::Identity();
Transform.block<3,3>(0,0) = matrix_R;

// Now that we have the rotation matrix, we can use it to also find the translation to move the cloud to the origin.
// First, rotate a point of interest to the new location.
Eigen::Vector3f modelVectorAxisPointTransformed =  matrix_R * modelVectorAxisPoint;

// Add the translation to the matrix.
Transform(0,3) = modelVectorAxisPointTransformed(0) * (-1.0);
Transform(1,3) = modelVectorAxisPointTransformed(1) * (-1.0);
Transform(2,3) = modelVectorAxisPointTransformed(2) * (-1.0);

// Perform the transformation
pcl::transformPointCloud(*cloudPot, *cloudPotTransformed, Transform);
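
The formula is also easy to sanity-check outside of C++. Here's a quick NumPy sketch of the same math (my own, independent of the code above); rotating the normalized input vector should land it right on the target direction:

import numpy as np

def rodriguesRotation(vector_A, vector_B):
    # Rotation matrix taking the direction of A onto the direction of B.
    # Fails if A and B are parallel (cross product norm = 0), as noted above.
    a = vector_A / np.linalg.norm(vector_A)
    b = vector_B / np.linalg.norm(vector_B)
    x = np.cross(a, b)
    x = x / np.linalg.norm(x)                # rotation axis
    theta = np.arccos(np.dot(a, b))          # rotation angle
    A = np.array([[ 0.0, -x[2],  x[1]],
                  [ x[2],  0.0, -x[0]],
                  [-x[1],  x[0],  0.0]])     # skew-symmetric matrix for x
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * A.dot(A)

R = rodriguesRotation(np.array([1.0, 2.0, 3.0]), np.array([0.0, 0.0, 1.0]))
print(R.dot(np.array([1.0, 2.0, 3.0]) / np.linalg.norm([1.0, 2.0, 3.0])))  # ~[0, 0, 1]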

And finally, an image showing all the viewports.

Thursday, July 2, 2015

Desktop Screen Recording and Video Editing (Ubuntu 14.04)

I wanted to do a desktop screen recording to show my new sorghum leaf segmentation prototype in action. I started with gtkRecordMyDesktop, and it worked well; however, it outputs only .ogv files. I was unable to find a video editor that was happy to take .ogv files, and after multiple failed attempts at converting the .ogv files to a different format, I turned to different screen capture software: SimpleScreenRecorder. As per the instructions on their site, installing it is as simple as adding the repository:
sudo add-apt-repository ppa:maarten-baert/simplescreenrecorder
sudo apt-get update
sudo apt-get install simplescreenrecorder
simplescreenrecorder
SimpleScreenRecorder was very straightforward and seemed more versatile than gtkRecordMyDesktop. It also allows writing to multiple file formats, including .mp4, and writing to .mp4 meant the recording could be edited in OpenShot.
sudo apt-get install openshot
openshot

OpenShot worked very well as a video editing tool. The end result is below.

[Video: screen recording of the sorghum leaf segmentation prototype]

Thursday, April 30, 2015

Find minimum oriented bounding box of point cloud (C++ and PCL)

Here we're trying to get the minimum oriented bounding box of a point cloud using C++ and the Point Cloud Library (PCL). Most of the code originates from user Nicola Fioraio on the PCL forums in this post.

Here is how user Nicola Fioraio describes the process:
    1) compute the centroid (c0, c1, c2) and the normalized covariance
    2) compute the eigenvectors e0, e1, e2. The reference system will be (e0, e1, e0 X e1) --- note: e0 X e1 = +/- e2
    3) move the points in that RF --- note: the transformation given by the rotation matrix (e0, e1, e0 X e1) & (c0, c1, c2) must be inverted
    4) compute the max, the min and the center of the diagonal
    5) given a box centered at the origin with size (max_pt.x - min_pt.x, max_pt.y - min_pt.y, max_pt.z - min_pt.z) the transformation you have to apply is Rotation = (e0, e1, e0 X e1) & Translation = Rotation * center_diag + (c0, c1, c2)

And here is my interpretation of the process along with the code. Here's the cloud we're starting with (it's a sorghum plant):

As I understand it, we first find the eigenvectors for the covariance matrix of the point cloud (i.e. principal component analysis, PCA). You'll replace the cloudSegmented pointer with a pointer to the cloud you want to find the oriented bounding box for.
// Compute principal directions
Eigen::Vector4f pcaCentroid;
pcl::compute3DCentroid(*cloudSegmented, pcaCentroid);
Eigen::Matrix3f covariance;
computeCovarianceMatrixNormalized(*cloudSegmented, pcaCentroid, covariance);
Eigen::SelfAdjointEigenSolver<Eigen::Matrix3f> eigen_solver(covariance, Eigen::ComputeEigenvectors);
Eigen::Matrix3f eigenVectorsPCA = eigen_solver.eigenvectors();
eigenVectorsPCA.col(2) = eigenVectorsPCA.col(0).cross(eigenVectorsPCA.col(1));  /// This line is necessary for proper orientation in some cases. The numbers come out the same without it, but
                                                                                ///    the signs are different and the box doesn't get correctly oriented in some cases.
/* // Note that getting the eigenvectors can also be obtained via the PCL PCA interface with something like:
pcl::PointCloud<pcl::PointXYZ>::Ptr cloudPCAprojection (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PCA<pcl::PointXYZ> pca;
pca.setInputCloud(cloudSegmented);
pca.project(*cloudSegmented, *cloudPCAprojection);
std::cerr << std::endl << "EigenVectors: " << pca.getEigenVectors() << std::endl;
std::cerr << std::endl << "EigenValues: " << pca.getEigenValues() << std::endl;
// In this case, pca.getEigenVectors() gives similar eigenVectors to eigenVectorsPCA.
*/
These eigenvectors are used to transform the point cloud to the origin point (0, 0, 0) such that the eigenvectors correspond to the axes of the space. The minimum point, maximum point, and the middle of the diagonal between these two points are calculated for the transformed cloud (also referred to as the projected cloud when using PCL's PCA interface, or reference cloud by Nicola).
// Transform the original cloud to the origin where the principal components correspond to the axes.
Eigen::Matrix4f projectionTransform(Eigen::Matrix4f::Identity());
projectionTransform.block<3,3>(0,0) = eigenVectorsPCA.transpose();
projectionTransform.block<3,1>(0,3) = -1.f * (projectionTransform.block<3,3>(0,0) * pcaCentroid.head<3>());
pcl::PointCloud<pcl::PointXYZ>::Ptr cloudPointsProjected (new pcl::PointCloud<pcl::PointXYZ>);
pcl::transformPointCloud(*cloudSegmented, *cloudPointsProjected, projectionTransform);
// Get the minimum and maximum points of the transformed cloud.
pcl::PointXYZ minPoint, maxPoint;
pcl::getMinMax3D(*cloudPointsProjected, minPoint, maxPoint);
const Eigen::Vector3f meanDiagonal = 0.5f*(maxPoint.getVector3fMap() + minPoint.getVector3fMap());
This gives us something like this; the orange cloud is the transformed cloud (shown with an axis-aligned bounding box, but we want the oriented bounding box so we're not done yet).

Finally, the quaternion is calculated using the eigenvectors (which determines how the final box gets rotated), and the transform to put the box in correct location is calculated. The minimum and maximum points are used to determine the box width, height, and depth.
// Final transform
const Eigen::Quaternionf bboxQuaternion(eigenVectorsPCA); //Quaternions are a way to do rotations https://www.youtube.com/watch?v=mHVwd8gYLnI
const Eigen::Vector3f bboxTransform = eigenVectorsPCA * meanDiagonal + pcaCentroid.head<3>();

// This viewer has 4 windows, but is only showing images in one of them as written here.
// (ColorHandlerXYZ is assumed to be a typedef defined elsewhere in the script, e.g. for
//  pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ>.)
pcl::visualization::PCLVisualizer *visu = new pcl::visualization::PCLVisualizer (argc, argv, "PlyViewer");
int mesh_vp_1, mesh_vp_2, mesh_vp_3, mesh_vp_4;
visu->createViewPort (0.0, 0.5, 0.5, 1.0, mesh_vp_1);
visu->createViewPort (0.5, 0.5, 1.0, 1.0, mesh_vp_2);
visu->createViewPort (0.0, 0.0, 0.5, 0.5, mesh_vp_3);
visu->createViewPort (0.5, 0.0, 1.0, 0.5, mesh_vp_4);
visu->addPointCloud(cloudSegmented, ColorHandlerXYZ(cloudSegmented, 30, 144, 255), "bboxedCloud", mesh_vp_3);
visu->addCube(bboxTransform, bboxQuaternion, maxPoint.x - minPoint.x, maxPoint.y - minPoint.y, maxPoint.z - minPoint.z, "bbox", mesh_vp_3);
visu->spin(); // Keep the window open until the user closes it.
And the final product is the bounding box around the original blue cloud.
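
For reference, the core math condenses nicely outside of PCL. Here's a NumPy sketch of the same steps on an (N, 3) array of points (my own distillation of the recipe above, not code from the original post):

import numpy as np

def orientedBoundingBox(points):
    # points: (N, 3) array. Returns (rotation, translation, size) of the box.
    centroid = points.mean(axis=0)
    # Eigenvectors of the covariance matrix give the principal directions.
    eigenValues, eigenVectors = np.linalg.eigh(np.cov((points - centroid).T))
    eigenVectors[:, 2] = np.cross(eigenVectors[:, 0], eigenVectors[:, 1])  # keep it right-handed
    # Express the points in the eigenvector frame and take the extents.
    projected = (points - centroid).dot(eigenVectors)
    minPoint, maxPoint = projected.min(axis=0), projected.max(axis=0)
    centerDiagonal = 0.5 * (minPoint + maxPoint)
    rotation = eigenVectors
    translation = rotation.dot(centerDiagonal) + centroid
    return rotation, translation, maxPoint - minPoint

points = np.random.rand(1000, 3) * np.array([4.0, 2.0, 1.0])  # toy cloud
rotation, translation, size = orientedBoundingBox(points)
print(size)  # roughly a permutation of [4, 2, 1]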

Monday, March 9, 2015

Postmortem: RIG: Recalibration and Interrelation of Genomic Sequence Data with the GATK

As mentioned in the previous post, the Llama and I recently got a workflow for using the GATK with a non-model organism published in G3: "RIG: Recalibration and Interrelation of Genomic Sequence Data with the GATK", and it's postmortem time (the code base can be found on GitHub here).

This manuscript was mostly a product of fortunate timing. Around early 2013, the Llama and I had been accepted as students in the lab we're currently in, and one of the things we were given to work on was converting restriction enzyme based reduced representation (RR) sequence data (a.k.a. Digital Genotyping or RAD-seq) to genotypes for the purpose of genetic map construction and QTL mapping. After iterating through a few ideas, we found that the software Stacks would be a good fit for our use case. We used Stacks for a little while, and its reference genome based pipeline worked fairly well for us.

Somewhere around mid to late 2013 we needed to do some variant calling from whole genome sequence (WGS) data. Previously, CLC Genomics Workbench had been used by the lab for this application. However, the Llama and I go out of our way to avoid non-open source software, so we looked for alternatives. We caught wind of the GATK and its many blessings, and started to learn the ropes for whole genome sequence variant calling. For the time capsule, here's a forum post where I asked the GATK forum for advice in generating a variant set for base quality score recalibration and variant quality score recalibration (BQSR/VQSR), and here's a forum post where I asked the Stacks forum for advice on generating a VCF file for use in recalibration with the GATK; that Stacks post was the nascent form of what ultimately evolved into the RIG workflow.

Since we had a reference genome, we transitioned away from Stacks and exclusively used the GATK. This let us use one tool that produced standard file formats for calling variants from both our RR and WGS data, which made it much easier to identify the same variants from both types of data in downstream analyses. So, we started using the UnifiedGenotyper for the large numbers of samples in our RR biparental family and association population analyses, and the HaplotypeCaller for the small number of WGS samples.

Then came the GATK 3.0 release, which brought the power of the HaplotypeCaller to bear on as many samples as desired thanks to the new database of likelihoods model. By this time we had something like 6 full sequencing runs (48 lanes) of RR sequence data and were running into scaling problems trying to run the analyses on a local workstation. To get around this, we decided to invest some time porting our current analysis pipeline to use the GATK's Queue framework on a university high performance computing cluster.

For initial testing, we set up Sun Grid Engine on a single workstation (setting it as both a submission and execution host; is that weird?) to prototype pipelining in Scala with Queue, and we got our whole pipeline up and running beautifully on the single machine: Queue was happily Scatter-Gathering, and HaplotypeCaller was churning through hundreds of samples in reasonable timescales (although not fast enough that we could have avoided moving to a compute cluster). It was awesome.

Unfortunately it wasn't so awesome on the compute cluster. We ran into a problem where Queue didn't interact with the resource manager (Univa Grid Engine) nicely, and the job submissions slowed to a crawl over time:


...



After much consternation, some forum posts, and lots of things that didn't work, we eventually threw in the towel. While I suspect that figuring it out would have been instructive from a computing standpoint, there were data that needed to be analyzed. We figured it would be safer to do a rewrite in Bash that we knew would work rather than try to debug the interaction between our Queue pipeline and the cluster architecture, so we threw away the Scala code and rewrote the pipeline in Bash. Unfortunately, we had to recreate some of what made Queue so valuable, including logging and dependency tracking, but at least it was working on the cluster.

Around that time, we were designing the first iterations of what would become the RIG workflow. Back then it was internally called the LERRG workflow (Leveraging Existing Resources for Recalibration with the GATK). Say it out loud.

LERRG.

It sounds like something you might cough up in the morning, and it looked similarly awful:


The basic premise was to use variants called from RR data to inform VQSR of WGS data, and then to use the recalibrated WGS variants for BQSR of the WGS reads, followed by another round of VQSR after variant calling from the BQSRed WGS reads.

Once we had the idea and code in place, it was just a matter of execution. As expected, the workflow works pretty well; the GATK's methods are solid so it would have been surprising if it didn't work. It also worked just fine to recalibrate the RR reads and variants.

We started to kick around the idea of submitting the workflow as a manuscript since it looked like there was a market for it: the GATK forum often has questions on how to use the GATK in non-model organisms (e.g. here and here), and the workflow was too complex to keep describing in the methods sections of other papers from the lab. So, we wrote up the manuscript; during internal revisions it got renamed to RIG: Recalibration and Interrelation of Genomic Sequence Data with the GATK. It also took considerable effort on the Llama's part to convince me that the diagram I had constructed for LERRG was, in fact, terrible; I doubt the manuscript would ever have been accepted had she not intervened and redesigned the thing from the ground up as we were working on the manuscript.

After a rapid reject from Molecular Plant, we targeted G3 since the review process there has been fair and reasonably fast in the past, and G3's readership seemed to include a good number of communities in situations similar to ours: genomics labs working on organisms with a reference genome and a sizeable chunk of sequence data. The reviews we got back suggested testing the workflow in a better characterized system, so we benchmarked it in Arabidopsis with good results.

What worked:
Timing: This was mostly a case of being in the right place at the right time. The GATK's 3.0 release occurred just as we were scaling up and in position to take full advantage of it, and our internal analysis pipeline hadn't been locked in so we were free to test out different options. The GATK had already been widely adopted by the human genomics community, and interest was growing in other communities in applying its benefits.

Prototyping, Refactoring, and Iteration: This was a relatively small scale project without many cooks in the kitchen, and many iterations of prototyping, testing, and redesigning or refactoring worked well. I doubt it would work well on a larger project, but the agility let us learn and improve things as we went.

What didn't work:
Development environment didn't represent deployment environment: We wasted a good chunk of time trying to debug why the Queue pipeline we wrote worked locally with Sun Grid Engine but not on the compute cluster with Univa Grid Engine. My lack of understanding of the internals of the resource manager and Queue prevented me from ever getting to the bottom of it, but we would have known about it much sooner if we had started developing and testing on the compute cluster where we would actually be running the analyses. Fortunately, we still learned something about resource managers by setting up SGE locally and learned a bit about Scala from the prototype.

Slow test cases: I still haven't generated any rapid test cases for testing and development of the workflow. Unfortunately, many of the problems arise mostly as a consequence of scale (e.g. keeping too many intermediate files consumes the hard drive space allocation before the pipeline finishes; the file handle limit is too small on the compute cluster; non-deterministic thread concurrency crashes occur with the parallelized HaplotypeCaller). I haven't found a way to test for these without running an analysis of a full dataset, and that can require hours before the crash occurs; simple debugging sometimes takes days.

Friday, February 13, 2015

Postmortem: Resolution of Genetic Map Expansion Caused by Excess Heterozygosity in Plant Recombinant Inbred Populations

The Llama and I just got a manuscript on the RIG workflow through review, and it's a good time for some postmortems. This postmortem is about Truong et al. (2014) Resolution of Genetic Map Expansion Caused by Excess Heterozygosity in Plant Recombinant Inbred Populations.

The story for this one starts when the Llama and I received about 6 lanes worth of sequence data for an experimental sorghum cross composed of 400+ individuals. We got the sequence data processed to genotypes using an early iteration of the RIG workflow (at that time it had a weirder, more awesomer moniker: the LERRG workflow), and we were trying to use R/qtl to construct a genetic map from the 10,000+ genetic variants that had been called. However, regardless of what we tried, we kept getting massive genetic maps that were well over 100% larger than what we expected. We even posted a question to the R/qtl forums.

As hashed out in that forum thread, it turns out that something generates what manifests as tight double crossovers in the genotype data. This phenomenon seems to occur with multiple types of genotyping technologies, so it's unclear whether these are biologically real or artifacts of the technology. Removing these tight double crossovers shrinks the map to more reasonable sizes, but we were still left with an unexpectedly large map. Nearing the end of our rope, we continued to scour the literature for causes of genetic map expansion. We came across a publication by Knox and Ellis (2002) titled "Excess Heterozygosity Contributes to Genetic Map Expansion in Pea Recombinant Inbred Populations"; our population also displayed excess levels of heterozygosity relative to a Mendelian model. While the Knox and Ellis paper didn't give conclusive reasons as to why excess heterozygosity caused genetic map expansion, it provided enough precedent for us to pursue it further.

The Llama went back to the old Haldane and Waddington models and some newer models on differential zygotic viability, and she cranked out a general solution for the genotypic frequencies expected for a recombinant inbred line population that hadn't yet gone to fixation and that had differential fitness of heterozygotes. I chucked it into the right spot in the R/qtl C code for genetic map estimation, and it worked out just fine. Or at least that's how we would have liked for it to have gone. The ride wasn't quite so smooth.

Getting things working involved a large number of false starts because of things like the general solution not being quite right, or the implementation not being quite right. Making it work became somewhat of an obsession, and we'd spend hours after we got home at night testing, fixing, and rebuilding. We eventually got it worked out in a San Francisco airport on our way home from the Keystone Symposia on Big Data in Biology: we had a general solution and an implementation that worked for the test cases.

As predicted by Knox and Ellis, the excess heterozygosity did indeed expand the genetic map. However, it turned out that excess heterozygosity doesn't always expand the map; it depends on the generation interval and the amount of heterozygosity. Ultimately, the take-home message is that tight double crossovers are a major source of genetic map expansion, and that segregation distortion can also expand the map.

We wrote the manuscript up, and after a couple of rapid rejects from PLOS Genetics and Genetics, G3: Genes|Genomes|Genetics sent it off for review. The G3 reviewers and editor were fair and informed. One reviewer suggested that we do a simulation study to demonstrate the method's efficacy, and, in hindsight, we're grateful for that suggestion.

What worked: 
Test cases - The only way we got the method working was that we finally made test cases with known output for the expected genotype probabilities given a generation interval and heterozygosity levels. Having these made it much easier to diagnose problems.
G3 - The associate editor and reviewers gave the manuscript a fair and complete review. They picked out reasonable weaknesses, and made useful suggestions that improved the paper.
R/qtl - This work almost certainly could not have been done had Karl Broman (and colleagues) not made the R/qtl code base available on GitHub. R/qtl may not be the prettiest code base around, but it was sufficiently documented that we could find our way around and bolt our pieces onto it. R/qtl has been chugging along for more than 10 years, and we hope it chugs along for many more.

What didn't work:
Not making test cases earlier - We probably could have saved ourselves a few weeks of consternation had we initially set up test cases. Guess what we did in the beginning - We took the implementation, tossed the 10,000+ markers in, and estimated the whole map to see if the size changed. When that didn't work, we tried it for individual chromosomes. Then we tried it for subsets of individual chromosomes. Then we tried it for pairs of markers. It wasn't until around the time of the Big Data conference that we wised up and got to the root of the output: the genotype probabilities. Only then, when we knew the expected probabilities for a given generation interval and heterozygote advantage, were we able to properly debug the issue.