Sunday, September 12, 2010

5 people assigned to review a project:

1 Doesn’t know the language
1 Reviewed the wrong modules
1 Didn’t complete the review
1 Reviewed it, but most of his comments were based on an incorrect assumption
1 Reviewed it, but has done several consecutive reviews, and only had a few comments


Sunday, September 05, 2010

An interesting memory allocation issue

I have spent the last couple of weeks finding and fixing an interesting memory allocation issue. The system I am working on is primarily written in PL1, with some 'C' linking it to the Unix OS. This works surprisingly well, all things considered. We are working to cut it over to Java, but in the meantime we still have to enhance the current system. One of the enhancements in the current release is to enable multiple concurrent posters of our external interface software to share state information among themselves. The key point is that these parallel processes now need shared memory, where previously each parallel poster had its own independent memory. Because we had never used shared memory in this way before, this required coding part of the solution in the 'C' interface layer between the PL1 and the Unix OS. I did not code the initial solution, but I did review it. When I reviewed it, I was concerned about the possibility of the C routine being called in parallel and returning incorrect information, and I was right: this was exactly the issue that occurred in the field!
The routine I was concerned about is passed a file identifier and an amount of memory to allocate. It returns a pointer to the memory segment that is shared between the parallel processes for that file, along with a flag indicating whether this call created the shared memory segment or merely attached to an existing one. By design, it is the responsibility of the process that initially created the segment to properly delete it when that process completes. (Under no circumstances would I have designed this routine this way; it is yet another rodent from the stray cat. Given the requirements, I probably would have put a usage counter into the design and destroyed the memory when the last process detached.) The code that had the bug is as follows (what follows is pseudocode, to protect the guilty):
1   shmId = shmget(key, size, MemoryAccessFlag);                  /* attach to an existing segment */
    if (shmId == -1) {
2       shmId = shmget(key, size, IPC_CREAT | MemoryAccessFlag);  /* create it (or attach) */
        createFlag = TRUE;
    }
This code first tries to attach to an existing shared memory segment; if that fails, it creates the segment and sets createFlag to TRUE to indicate that it did so. So what was the bug? In short, the routine did not properly account for the possibility of being called in parallel, simultaneously. To understand the issue, you have to know that if two processes call shmget at the same time, the first starts processing and the second blocks until the first completes; shmget uses a semaphore internally to prevent multiple processes from running through it at the same time. For this code block, if the shared memory block does not exist for a given key and two processes start the block simultaneously, the following happens:
1) The first process calls shmget at line 1 above and starts processing
2) The second process calls shmget at line 1 above and blocks.
3) The first process completes the shmget call and receives a -1 because the block doesn't exist.
4) The second process unblocks, starts executing, completes, and receives a -1 because the block still doesn't exist.
5) The first process calls shmget at line 2 to create the memory block (or attach to an existing one). This occurs either before the second process completes its line 1 call or after; it doesn't matter. If the first process calls before the second completes its line 1 call, the first will block waiting for the second to complete.
6) The second process also calls shmget at line 2 to create the memory block or attach to an existing block. It now blocks again waiting for process 1 to create the memory block.
7) The first process completes its line 2 call with the memory block created. The createFlag is correctly set to TRUE.
8) The second process unblocks, and also gets the id for the same memory block. In this case the createFlag is incorrectly set to TRUE.
It took 4 full days to find this issue (nothing like starting a complicated bug hunt with 2 incorrect pieces of information), including half a day to build a test program to positively identify it, 2 hours to code the solution, 2 days to stress test it using the test program, and 3 more to peer review it and get it incorporated into the baseline. (I spent 5 hours on the issue during the last 5 days of resolving it.) The solution is to rework the allocation process so that the create call succeeds only if the memory block does not already exist, then loop around and attempt to get the existing block if the create failed.
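That reworked allocation can be sketched as follows, under the assumption that the wrapper name getSharedSegment and the 0600 access mode are stand-ins for the real routine's names. The create is attempted first with IPC_EXCL, which makes shmget fail with EEXIST if the segment already exists, so at most one caller can ever win the create and set the flag:

```c
/* Sketch of the fix: try an EXCLUSIVE create first.  IPC_EXCL makes
 * shmget fail with EEXIST when the segment already exists, so only one
 * process can ever see "I created it".  getSharedSegment and MEM_ACCESS
 * are hypothetical stand-ins for the real routine's names. */
#include <errno.h>
#include <stddef.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define MEM_ACCESS 0600   /* stand-in for the real MemoryAccessFlag */

int getSharedSegment(key_t key, size_t size, int *createFlag)
{
    for (;;) {
        /* Try to create; fails with EEXIST if the block already exists. */
        int shmId = shmget(key, size, IPC_CREAT | IPC_EXCL | MEM_ACCESS);
        if (shmId != -1) {
            *createFlag = 1;          /* we created it; we must delete it later */
            return shmId;
        }
        if (errno != EEXIST)
            return -1;                /* genuine failure */

        /* The block exists: attach to it. */
        shmId = shmget(key, size, MEM_ACCESS);
        if (shmId != -1) {
            *createFlag = 0;
            return shmId;
        }
        if (errno != ENOENT)
            return -1;
        /* The creator deleted the block between our two calls;
         * loop around and try the create again. */
    }
}
```

With this ordering, the double-TRUE interleaving is impossible: whichever process loses the exclusive create gets EEXIST and falls through to a plain attach, and the retry loop covers the window where the creator deletes the block between the two calls.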

Yet another rodent ...

So two weeks ago, I was assigned an urgent-priority defect. The defect was in the external interface to our finance package: when it was being taken off-line, it would abnormally terminate, resulting in a support call (it is not a user-clearable issue). For most of our clients, taking this interface off-line is a weekly event, so this really was an urgent issue. After some digging around, I finally determined that the root of the issue actually occurred when the software was being taken on-line. (I'll put up a separate post with the technical explanation.) The specific code was very familiar to me; I had peer reviewed it when it was first added to the system. My review finding was that the code had a potential timing issue and should be reworked to account for the possibility of it being run in parallel. It turns out this defect was exactly that timing issue! When I went back to the summarized notes, the stated reason for not making the recommended change was, "the process that utilizes this code runs serialized, not in parallel." Two weeks wasted to find, fix, validate, peer review, QA test, and push out to the clients yet another rodent from the stray cat.