There are a few theories which help to explain the parlous state of the software industry in general, and of software research in particular. They point to problems which are widespread and affect other industries too, but software is particularly badly afflicted because of circumstances unique to it.
William Whyte, author of The Organization Man, seems to have been the first to recognize there was a problem. He noticed an overall decline in the quality of research and attributed this, ultimately, to the replacement of the Protestant Ethic (study hard, work hard, save for a rainy day, and you will do well) by the Social Ethic (be a good team player and your company will look after you -- a kind of corporate socialism, if you like). The problem with being a good team player is that it means you don't think differently from everyone else -- the very essence of research.
However, Whyte's thesis, while still generally valid, now looks somewhat dated. Companies expect their staff to toe the line and be loyal, but this loyalty is no longer reciprocated by them. What we have now is corporate fascism, not corporate socialism.
He's still right that being required to work as a team, at the expense of thinking as individuals, is a large part of the problem. But it's the element of coercion, not the teams themselves, that's the problem. Teams that grow spontaneously from individuals who complement each other usually work better than their members would in isolation.
This is the simplest of the three theories. It explains the lack of research, as research only yields fruit after many years. It also explains the selection of job applicants by increasingly narrow skill sets based on the latest fad technologies, any of which could become obsolete in the medium term, and indeed it explains the fads themselves.
Because more and more people are using computers, computers had to be made easier to use. The command line had to be replaced with a "GUI" which was easy to use, but harder to implement. Further help was made available in the form of a certain paperclip. Software became more complex, which in turn required more programmers to implement it. And because more and more people were being paid to program computers, the ability of the average programmer dropped markedly, and those increasingly complex systems were badly implemented. To prevent programmers from screwing up too much, many systems were implemented in languages (like Java and XML) designed to make screwing up difficult. Although it might have made more sense to use more talented and productive programmers, this option was never considered, because management usually regards programmers as interchangeable widgets, without any significant differences in productivity.
Many software managers look with envy at other engineering activities, such as construction and manufacturing, because it is possible to predict quite accurately how long an activity will take, how much it will cost and how good the final product will be. They also see enormous improvements in the average productivity of workers brought about by division of labour. In many cases, they see the use of machines to do work which was formerly done by people. So they decide to apply project management techniques which work well in those industries to software development, but in the process fall victim to some fundamental misunderstandings.
Firstly, they separate design and coding. They compare the programmer's role to that of an assembly line worker, when they should be comparing programmers to the engineers who design the assembly line. They compare it to that of a construction worker, when they should be comparing programmers to architects. And they hope that one day, all this menial coding can be automated. It never occurs to them that it's the copy command that they should be comparing to the assembly line worker.
Secondly, they ignore an important difference: in most branches of engineering, engineers don't need to get values exactly right, as long as the values in the final product are in the right range, so they typically test the product by sampling. The design itself cannot be tested directly. In programming (and digital circuit design), perfection is mandatory, but bug-free code is practically impossible to write. However, testing the design (i.e. the program) is possible, and the copies which constitute the product are guaranteed to be identical. This means that engineering quality control methods, designed to ensure that sampled values lie within a range at the production stage, cannot be transferred to software development.
Engineering envy is not a recent phenomenon. It is decades old and so cannot explain the current state of the industry.
The Reciprocality website is about a theory that most people are addicted to the by-product of their own boredom. At first glance that might seem far-fetched, but the argument is well-made. The starting point for Reciprocality is much less controversial: that there are two different modes of thinking, mapping and packing. Mapping involves building and maintaining a mental model of the world we experience, whereas packing just involves acquisition of facts and rules. Packing works well for repetitive tasks, but fails miserably for tasks requiring creativity. To be a mapper, you need to do a lot of reflective thinking. The packer mode of thinking became dominant after the introduction of agriculture. Packers are now in almost complete control and are making mappers' lives hell. There is one thing missing: why is this happening now? Maybe it's simply that it's taken this long for packer thinking, in the form of methodologies, certification, and so on, to completely permeate the realm of software development.
This page was last updated on Jul 26 at 00:53.
© Copyright Donald Fisk 2004