Principles of Software Flow


Are all Models Wrong?

One of my favourite quotes comes from the accidental statistician George E. P. Box

Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.

http://en.wikiquote.org/wiki/George_E._P._Box#An_Accidental_Statistician.2C_2010

It was interesting to find a blog article seeking to disprove this assertion.

Here is the crux of his argument:

Suppose Model A states, “X will occur with a probability that is greater than 0 or less than 1.” And let Model B state that “X will occur”, which of course is equivalent to “X will occur with probability 1” (I’m using “probability 1” in its plain-English, and not measure-theoretic, sense).

 

Now, if X does not occur, Model B has been proved false. The popular way to say it is that Model B has been falsified. If X does occur, Model B has been proved true. It has been truified, if you like.

 

How about Model A? No matter if X occurs or not, Model A has not been falsified or truified. It is impossible for Model A to be falsified or truified.

http://wmbriggs.com/blog/?p=1906

Now, I am not a statistician.  However, I reject this argument as wrong (although I accept it is probably true).

The reason why is explained in the comments:

I had the good fortune to study with Dr. Box, and I’m afraid you’ve misconstrued [h]is aphorism. You have somehow managed to conflate “All models are wrong” with “All models are false” and then went on your merry way skewering your strawman.

 

I can assure you from first hand interaction with Dr. Box, that “All models are wrong” means simply, “All models have error”. In the silly example you state, the “unfalsifiable” Model A isn’t really even a model. 

ibid

However, while the argument is wrong, I feel it is still useful.

When Matt Briggs presents Model A

“X will occur with a probability that is greater than 0 or less than 1.” 

He is actually saying “we know that this model is wrong.”

If the probability were 0.5 he would be saying “Half the time this model is wrong, half the time it is right.”

He is still saying that the model is wrong.  He is just quantifying how often he expects it to be proved wrong.
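The point can be made concrete with a small simulation (my own hypothetical sketch, not anything from Briggs or Box): treat a probabilistic statement as a bare prediction and count how often it fails.

```python
import random

def wrongness_rate(p, trials, seed=42):
    """Read 'X occurs with probability p' as the bare prediction 'X will occur',
    and measure how often that prediction turns out to be wrong."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        x_occurred = rng.random() < p  # the world: X happens with probability p
        if not x_occurred:             # the prediction 'X will occur' was wrong
            wrong += 1
    return wrong / trials

# A model with p = 0.5 is wrong about half the time; with p = 0.9, about a
# tenth of the time.  The error is quantified, which is what makes it useful.
print(wrongness_rate(0.5, 100_000))
print(wrongness_rate(0.9, 100_000))
```

Knowing the model is wrong five times out of ten is very different from merely knowing it is wrong, and that difference is the whole question of usefulness.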

Which is exactly what George Box was talking about.  

how wrong do they have to be to not be useful.

Is being wrong half the time good enough to be useful?  How about sixty percent?

At this point the precision of the language does not suit the levels of uncertainty we are dealing with when people like myself, non-statisticians, are talking.

Instead we are better off using less precise terms such as “unlikely”, “more often than not” or “rarely.”

Michael Jackson captures this distinction in his Problem Frame approach by distinguishing between formal and informal descriptions.

He explained the distinction back in 1998:

In any system of this kind it is important to recognise that the machine is completely formal, while the world is almost invariably mostly informal. The machine has been carefully constructed so that its fundamentally informal physical nature has been tamed and brought under control.

http://www.jacksonworkbench.co.uk/stevefergspages/papers/jackson–a_discipline_of_description.pdf

In software the machines we build are always models and nothing more.  All models are wrong.  Software will always have bugs.

Why does any of this matter?

Because lately it has become fashionable to reject formal disciplines and practices that have proven useful simply because they can be proven to be wrong.  The response is to become completely informal.

A good example of this is the NoEstimates movement.

Instead of depending on an accurate estimate for predictability we can take away the unknowns of cost and delivery date by making them… well, known. 

http://neilkillick.com/2013/01/31/noestimates-part-1-doing-scrum-without-estimates/

I’m not criticising Neil Killick here.  He makes a good argument.  The problem I have is with the people who have tried to use his “no estimates” aphorism as a principle or even, sometimes, as fact.

Are estimates always wrong?  Yes.

Are estimates sometimes useful?  Yes.

The problem is that we need to know just how wrong our estimates are if we are to know when they are useful.


What Does It Mean to Move On From Agile?

What do I mean when I say I’m moving on from Agile?

Am I abandoning the values and principles?

Of course not, because these were my values and principles prior to the Agile Manifesto.

This way of thinking was around for decades before the term “Agile” was coined.  If you don’t believe me then read Brodie’s Thinking Forth (1984).

These principles were not a revelation received at Snowbird.  They were the common core values that all of the participants shared.

The Agile Manifesto was written in February of 2001, at a summit of seventeen independent-minded practitioners of several programming methodologies. The participants didn’t agree about much, but they found consensus around four main values.

http://www.agilealliance.org/the-alliance/the-agile-manifesto/

Let’s just put this conference into perspective.  17 developers got together, argued a lot and then finally came up with a core set of values.

I know that 17 developers actually managing to agree on something is a rare and precious thing, but please have a sense of perspective.  Please stop quoting the manifesto like it’s scripture.

Does it mean that I’ll stop using incremental development?

Of course not, because I was already using incremental development prior to the Agile Manifesto.  First I did RAD, then DSDM.  Since Arie van Bennekum happened to be there, DSDM gets to call itself Agile.

Is Gilb’s EVO Project Management method Agile?  No, because he wasn’t there.

What if Arie van Bennekum had missed his flight?  Would DSDM still be considered Agile?  Is Agile just an accident of scheduling?  Yet this label, Agile, is used as if it has some deep, significant meaning.

For many years Agile served as a useful signal.  When I started on a new team I could be confident that the other members talking about Agile would share my values and perspective.  The term “Agile” served as a useful placeholder for those values and perspectives.  It was a perspective that valued new ideas and an open mind.

This is no longer the case.  Now I find myself constantly disagreeing with those who claim to be Agile.  I have come to associate the term “Agile” with a blinkered, prescriptive mindset.


Sorry Agile, I Need To Move On.

I’m sorry Agile, but it’s time for me to move on.

Moving On by Romy Mistra

I’m afraid you’re no longer the Agile I fell for.  Now you’re something else entirely.  I don’t like what you’ve become.

You used to have such high principles, but now all you want to do is set down rules.

Once, when I looked at you, I saw alluring mystery.  You used to be so magical, so full of surprises and apparent contradictions.  Now all I see are to-do lists and time boxes.

We don’t like to listen to the same sounds.  I still adore the relational model.  Every time I listen I discover something new.  You just find it old fashioned.  You tell me I should be listening to NoSQL but it just sounds like a lot of noise to me.  I want to experiment and solve new problems but you seem satisfied with BDD.  Hearing those same three chords repeated over and over again is boring me.  I’m sick of you dismissing everything I find interesting as ‘too waterfall’.

I hope we can stay friends.   It has been an interesting journey, but now I need to find a path less travelled.


Facts Not Opinions

The need for Experimentation

As aspiring Software Craftsmen we are always looking for ways to raise the bar for professional software development. Practising the craft of writing high-quality code that keeps adding value is essential. Working together in a community of professionals enables us to teach and learn from our shared experience.

However, if we really want to raise the bar of software craftsmanship, I believe we also need to perform experiments. Do you agree with me? If you don’t, perhaps it is because we have a different idea about what an experiment is.

If we look back in history we see the industrial revolution changing the world when groups of craftsmen came together to raise the bar for their various industries. Back then craftsmanship and engineering were the same thing, and we have much to learn from that time. Let’s consider one man, testing pioneer David Kirkaldy.

If you visit Kirkaldy’s Testing Museum on Southwark Street, just behind the Tate Modern, you’ll see David Kirkaldy’s motto carved above the door: “Facts Not Opinions.” I think we have a lot to learn from David Kirkaldy. He performed experiments that replaced conflicting opinions with clear facts. He became famous for his work related to the first Tay Bridge.

Bridge Building and Software Development

It is often said that writing software is not like building bridges. As Stack Overflow founder Jeff Atwood puts it (original emphasis):

I find these discussions extremely frustrating, because I don’t think bridge building has anything in common with software development.* It’s a specious comparison. Software development is only like bridge building if you’re building a bridge on the planet Jupiter, out of newly invented materials, using construction equipment that didn’t exist five years ago.

http://www.codinghorror.com/blog/2005/05/bridges-software-engineering-and-god.html

We don’t have to go very far to find a time when bridge building was just like software development. It was just one hundred and fifty years ago that bridges were being built out of newly invented materials using construction equipment that didn’t exist a few years before.

When we think of timeless engineering we might envisage Fowler and Baker’s Forth Bridge. Many might be surprised to discover that the famous design was not the first to be chosen. An earlier design by Thomas Bouch had been commissioned and the foundations had already been laid before the design was found to be inadequate.



Let us consider Thomas Bouch for a moment. He had an impressive reputation, having helped with the invention of train ferries and the construction of rail lines. His Tay Bridge had successfully passed three days of inspection to be declared safe for public traffic. In June 1879 Queen Victoria herself travelled across the bridge and knighted Bouch for his achievement. He appeared to have an impressive track record of successful projects despite some mishaps caused by poor engineering practices and sloppy shortcuts.

The general opinion was that Bouch was an excellent engineer. The facts, however, were quite different. In December 1879, the Tay Bridge collapsed with 60 lives lost. The official enquiry concluded that the bridge had been “badly designed, badly built and badly maintained, and that its downfall was due to inherent defects in the structure, which must sooner or later have brought it down.”

A badly designed bridge was poorly constructed and yet it managed to pass what appeared to be rigorous acceptance testing. Then it was poorly maintained before finally collapsing disastrously. Is it possible to imagine that bridge building back then was anything like software development is today?

So how did bridge building change? Is there anything that we can learn from the pioneers who brought about those changes? Can we follow in their footsteps so that the crafting of code might one day be held in the same regard as the building of bridges?

As Software Craftsmen we may be able to relate with surprising ease to David Kirkaldy and the way the Scottish engineer revolutionised testing with the invention of the Tensometer.

From Opinions to Facts

Alistair Cockburn describes the difference between bridge building and software development:

Civil engineers who design bridges are not supposed to invent new structures. Given a river and a projected traffic load, they are supposed to take soil samples and use code books to look for the simplest structure that handles the required load over the given distance, building on the soil at hand. They base their work on centuries of tabulation of known solutions.

Chapter 1 of Agile software development: the cooperative game

“Centuries” may be overstating how long these tabulations have been recorded. For wrought-iron and steel it started with David Kirkaldy. When he started investigating the matter in 1862 he was surprised to discover very little was to be found:

It seems remarkable that whilst we have the results of many important and reliable experiments on Cast-iron, extremely few have been made, or at least published, on Wrought-iron, and almost none on Steel.

Results of an Experimental Inquiry into the Comparative Tensile Strength and Other Properties of Various Kinds of Wrought-Iron and Steel

While very little hard experimental data was to be found on the subject, there was no shortage of opinions:

Although much has been written on the subject of wrought-iron and steel, yet, such is the great diversity of opinions held and stated by different individuals.

ibid

Kirkaldy’s solution to the problem was to provide experimental data.

It is hoped the results of these experiments, intended simply to elicit the truth, will be considered worthy of examination by those interested, and also at the same time prove of practical utility.

ibid

The same lack of facts and diversity of opinions has been observed in software. In his book “Software Conflict”, Robert L. Glass observes:

In the professional literature we tend to see opinions presented as truth and advocacy presented as fact, with nothing acknowledging the tentative nature of some of these facts and much of this truth. Even noted computer scientist David Parnas has labeled much of our computer science truth “folklore,” because it has not been experimentally verified.

http://www.developerdotstar.com/books/software_conflict_glass.html

How can we progress past the diverse opinions of advocacy to a better understanding of the materials we work with? Is it only possible with large universities and generous grants? Is it within the practical reach of regular practitioners like ourselves?

Kirkaldy was a regular practitioner just like us. He built the tool needed and carried out his experiments as a personal side project:

At the time it was only intended to test a few specimens of each, but the investigation proved so interesting in itself and so likely to conduce to important practical results, that I was induced… to extend the experiments, as leisure and opportunity offered, very considerably beyond what had been originally contemplated.

Results of an Experimental Inquiry into the Comparative Tensile Strength and Other Properties of Various Kinds of Wrought-Iron and Steel


His invention, the Tensometer, was the JUnit of its day: it wasn’t overly clever, but it did its job well:

The apparatus employed was of the simplest construction, and proved during the experiment to work most satisfactory.

ibid

You can go and see it today, at Kirkaldy’s Testing Museum in London on Southwark Street, just behind the Tate Modern. To our eyes it may look like a monster of a machine, but working at this scale was all in a day’s work for the Victorian engineer.

Is experimentation applicable to Software Craftsmanship? Certainly we do not work with iron and steel like the Victorian engineers, but that does not mean that experimentation is not applicable. These engineers were learning from the work of chemists who worked with liquid and gases. The materials were very different, but the principles of truth and rigour remained the same.

Practical vs Academic Experimentation

As craftsmen we are practical people. Do we really have time to experiment? Is it really a productive way to add value?

Today we think of experiments as the exclusive realm of the academic given to the pursuit of abstract goals, not the hard-working practitioner. This was not always the case. For David Kirkaldy experimentation was essential if conflicting, complex theories and opinions were to be replaced with straightforward, simple facts.

The academics were there, too. They were usually wealthy members of the Royal Society. Being of independent means, they were not bound by the need to earn a living. Sometimes there would be conflict between the two groups of experimenters. Consider, for example, the safety lamp, an important invention made independently, through careful experimentation, by two different people during the same year: 1815.


Sir Humphry Davy invented the Davy lamp. He was a knight of the realm, first baronet and a Fellow of the Royal Society. He was already famous for his work on gases, such as the discovery of laughing gas (nitrous oxide). His lectures were well attended in fashionable London. Not only did he invent a safety lamp, but he also progressed the scientific understanding of firedamp.

It would be a couple of decades before George Stephenson became historically famous for building the first railways. At this time he was an unknown engine-wright in the north of England, responsible for maintaining and repairing the steam engines used at the collieries of Killingworth.

Stephenson was largely self-educated and his experiments did not make him popular. However, he made them with a practical purpose in mind. Samuel Smiles relates:

For several years he had engaged, in his own rude way, in making experiments with the fire-damp in the Killingworth mine… One of the sinkers, observing him holding up lighted candles to the windward “blower” or fissure from which the inflammable gas escaped, entreated him to desist; but Stephenson’s answer was, that “he was busy with a plan which he hoped to make his experiments useful for preserving men’s lives.” On these occasions the miners usually got out of the way before he lit the gas.

Lives of the Engineers


Stephenson’s work did prove useful. His theory was flawed and his risk management was poor: the experiments with his prototype lamps involved carrying them into a pit known to be full of explosive gases. He did not advance scientific theory, but he did create a working safety lamp. Some controversy followed regarding who should take credit for the invention.

While Davy was clearly the better scientist, the question remains as to whose experiments yielded the better results. In normal conditions both lamps performed equally well. However, in some exceptional circumstances there was a very important difference: Davy’s would burn red hot and potentially cause an explosion, while Stephenson’s would safely go out:

A sudden outburst of gas took place… Upon this occasion, the whole of the Stephenson’s lamps, over a space of five hundred yards, were extinguished almost instantaneously; whereas the Davy lamps filled with fire and became red-hot… Had a strong current of air been blowing through the gallery at the time, an explosion would most probably have taken place.

ibid

The lamps were put to rigorous testing by Dr Pereira for the Committee on Accidents in Mines. While both lamps had their faults, the conclusion was that when exposed to a current of explosive gas the Davy lamp was “decidedly unsafe,” and that the experiments by which its safety had been “demonstrated” in the lecture-room had proved entirely “fallacious.” On the ground, practical experimentation resulted in better products, not inferior science. I know which lamp I would have preferred.

Software Experiments

Having established that experiments are carried out not only by serious scientists in white lab coats but also by sober, mutton-chopped engineers in frock coats, what about software developers in t-shirts and trainers? Does experimentation have any place when writing software?

Experimentation was once common practice among practitioners. In his book “Software Conflict” Robert L. Glass relates the findings of an area of research called “protocol analysis”, in which observers would sit quietly and watch practitioners at work. They filmed them and then scrutinised the tapes to see how the design process worked. The process is familiar to all of us:

    • understanding the problem
    • decomposing the problem into goals and objects
    • selecting and composing plans to solve the problem
    • implementing the plans
    • reflecting on the product and the process

http://www.developerdotstar.com/books/software_conflict_glass.html


It is the way in which these software engineers pursued the second step, the decomposition of the problem into goals and objects, that we might find surprising.

The designers, mentally and at lightning speed, were doing the following things:

    1. They constructed a mental model of a proposed solution to the problem.
    2. They mentally executed the model – in essence, running a simulation of the model – to see if it solved the problem.
    3. When they found that it didn’t (usually because it was too simple), they played the inadequate model back against those parts of the problem to see where it failed, and enhanced the model in those areas.
    4. They repeated steps 1-3 until they had a model that appeared to solve the problem.

http://www.developerdotstar.com/books/software_conflict_glass.html
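The four steps read almost like executable pseudocode. Here is a toy sketch in Python; the set-based problem representation and the names `design_by_simulation` and `problem_cases` are my own illustration, not anything from Glass’s book:

```python
def design_by_simulation(problem_cases):
    """A sketch of the observed loop: build a model, 'run' it against the
    problem, find where it fails, enhance it there, and repeat."""
    model = set()  # step 1: start from a (too simple) proposed solution
    while True:
        # step 2: mentally execute the model against the problem
        failures = [case for case in problem_cases if case not in model]
        if not failures:
            return model  # step 4: the model now appears to solve the problem
        # step 3: play the inadequate model back against the parts of the
        # problem where it failed, and enhance it in those areas
        model.add(failures[0])

print(design_by_simulation(["parse input", "validate totals", "report errors"]))
```

The interesting part is not the trivial code but the shape of the loop: the model is repeatedly tested to destruction against the problem, and each failure tells the designer exactly where to enhance it.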


Are you taking an Agile approach to your coding? If you are, then good for you. Now, are you following the approach described above? Are you constructing models, either in your head or as tests, and then testing them to destruction so that you can find where the model fails and improvement is required?

Or does your process more closely resemble the approach taken by those teams that were observed to fail:

These same researchers have explored the problems of people who are not very good at design. Those people tend to build representations of a design rather than models; they are then unable to perform simulation runs; and the result is they invent and are stuck with inadequate design solutions.

http://www.developerdotstar.com/books/software_conflict_glass.html


In Scrum I have observed a process where architects create complex, epic stories that span many months of work. These stories are a representation of the final system. They are broken down into smaller stories that are then fed to the developers in sprint-sized chunks. The developers are simply order takers, fed their orders just in time. They have no opportunity to build and refine their own models through practical experimentation.

A recent thread on the Lean Development mailing list suggests that I am not alone. John Herbert describes a similar situation:

Relegating developers to ‘order takers’ is exactly what I am describing. This is basically what the developers are saying, they have no room/time for creative thinking. That is at the root of my question. How do we achieve this balance? The dev team is on pretty strict 2 week sprint schedule, but it seems the 2 weeks has no time built in for anything other than getting the specific requirement developed. Where is the ‘alternate’ solutioning, or outside the box thinking achieved?? …

So where do we assign time for innovation without excluding any members of the team?

http://tech.groups.yahoo.com/group/leandevelopment/message/5920


Are you working to a tight sprint schedule, working hard to deliver the backlog items promised?  Are you in a position to say “Stop!”?  Can you declare the current approach a failure, and go back a few steps to pursue an alternative approach?  Are such ideas unrealistic?  Is it simply impractical?  Then how come that was how successful programmers were able to work?

Our Software Craft

If we are Software Craftsmen then what is our craft? Is it the creation of code? No! If typing out lines of code is what we aspire to do how can we ever follow this principle of the Agile Manifesto?

Simplicity–the art of maximizing the amount of work not done–is essential.

http://agilemanifesto.org/principles.html

If the best code we will ever write is the code we avoid writing then how can writing code be our craft?

As Software Craftsmen we solve problems using code. To build those solutions we may use code, but first we must solve the problem.

Ask yourself: are you a Software Craftsman? If you are just taking orders rather than solving problems then you are not a craftsman, you are a factory worker. If you are learning, then is your mentor showing you how they solve the problem? Are they sharing their craft or simply giving orders?


Steam Punk Programming

I’ll be blogging about the Code Shares we’ve been running at the LJC and GDC soon.   In the meantime here’s my Steam Punk Programming lightning talk from when I first announced the code share: http://skillsmatter.com/podcast/home/steam-punk-programming.


My Book Reviews on the LJC Book Club

I’m running the Book Club for the LJC and I’ve posted a couple of book reviews there.

I have a few more reviews in the pipeline for Cassandra, OSGi and Flow Based Programming.


Greg Wilson’s What We Actually Know About Software Development

There is an excellent talk from CUSEC given by Greg Wilson.  It’s called “What We Actually Know About Software Development, and Why We Believe It’s True.”

You can watch it here: http://vimeo.com/9270320

Here are my notes on the talk.

The Lack of Evidence

11:10

Martin Fowler’s claims about Domain Specific Languages

  • Using a DSL leads to:
    • Improved Productivity
    • Improved Communication
  • Although the article appeared in an academic journal, no citation or evidence is given.
  • A Scottish verdict:
    • True
    • False
    • Not proven
  • Fowler claims that the debate is hampered because people don’t know how to do DSLs properly.
  • Wilson believes it is because of the low standards of proof.
  • We should have higher standards.
  • Things are getting better: there are more results from field studies.
  • Standards improve each year.

Estimation and Anchoring

14:50

Aranda & Easterbrook (2005): “Anchoring and Adjustment in Software Estimation”

Three groups were each given a two-page specification. The specifications were identical but for one paragraph, and that one paragraph had a strong effect on the estimates provided.

  • Lowball anchor (mean estimate: 5.1 months): “I admit I have no experience with software projects, but I guess this will take about 2 months to finish.”
  • Control (mean estimate: 7.8 months): “I’d like to give an estimate for this project myself, but I admit I have no experience estimating. We’ll wait for your calculation for an estimate.”
  • Highball anchor (mean estimate: 15.4 months): “I admit I have no experience with software projects, but I guess this will take about 20 months to finish.”

Rock Star Programmers and Poor Evidence Standards

18:18

“The best programmers are up to 28 times more productive than the worst”

  • Sackman, Erickson and Grant (1968) “Exploratory experimental studies comparing online and offline programming performance”
  • Study was designed to compare batch vs iterative approaches, not productivity.
  • Productivity measure was not explained.
  • Best vs worst always exaggerates the effect. Standard deviation around the mean is better.
  • Just 12 programmers for an afternoon.
    • The next “major” study was 54 programmers for less than an hour.
  • In 1968 every programmer was self taught.

Improving Productivity

21:05

Look at the work of Lutz Prechelt

  • variations between programmers
  • effects of language
  • web programming framework

Studies are expensive and hard to do.

  • Not compared to the cost of drugs research.
  • A 5% productivity increase in a trillion dollar industry is worth a lot.

Two Approaches: Pessimism and Optimism

23:40

Boehm et al (1975) “Some Experience with Automated Aids to the Design of Large-Scale Reliable Software”

  1. Most errors are introduced during requirements analysis and design.
  2. The later they are removed, the more expensive they are to take out.

Two approaches to this problem:

  • Pessimists:
    • “If we tackle the hump in the error injection curve, fewer bugs will get to the expensive part of the fixing curve.”
  • Optimists:
    • “If we do lots of short iterations, the total cost of fixing bugs will go down.”

Why are there so few women in software development?

25:55

Ceci & Williams (eds): Why Aren’t More Women in Science? Top Researchers Debate the Evidence.

There’s a review here: http://www.americanscientist.org/bookshelf/pub/changing-assumptions

28:22

Carol S. Dweck’s essay (pdf) Is Math a Gift? Beliefs That Put Females at Risk 1)

  • Split the participants into two groups:
    • Group One: “This task requires aptitude.”
    • Group Two: “This task is entirely practice based.”
  • Group One does worse.
  • Even when the group is told that men have more aptitude than women, the men still do worse.
  • When a difficulty is encountered, group members conclude that they lack the aptitude and quit.

Improving Productivity (reprise)

30:58

  • For every 25% increase in problem complexity, there is a 100% increase in solution complexity. (Woodfield 1979)
    • Non-linear due to interaction effects.
    • Reducing problem complexity reduces the solution complexity.
    • Maybe this is why Agile works?
  • The two biggest causes of project failure are poor estimation and unstable requirements. (van Genuchten 1991 and many others)
    • There is no evidence that this is improving.
    • “I want you to make the web site more webish and not too webish. How long will that take?”
  • If more than 20-25% of a component has to be revised it’s better to rewrite it from scratch (Thomas et al, 1997)
    • Based on Flight Avionics. Very strict requirements.
    • Does it apply in other domains?
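Taken at face value, Woodfield’s 25%/100% figure implies a power-law relationship between problem and solution complexity. A quick back-of-envelope check (my arithmetic, not Woodfield’s):

```python
import math

# If multiplying problem complexity by 1.25 doubles solution complexity,
# then solution ~ problem**k where 1.25**k == 2.
k = math.log(2) / math.log(1.25)
print(round(k, 2))  # about 3.11

# The flip side: halving problem complexity cuts solution complexity
# by far more than half.
print(round(0.5 ** k, 3))
```

An exponent around three would explain why trimming even modest amounts of scope pays off so disproportionately, which is consistent with the “maybe this is why Agile works” remark above.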

Code Reviews

33:11

  • Rigorous inspections can remove 60-90% of errors before the first test is run. (Fagan 1975)
    • Code review is the best bug fixing method.
      • Better than unit tests.
      • Better than executing the code.
  • The first review and the first hour matter most. (Cohen 2006)
    • Having two people read the code is not economically effective.
    • How much code can you read in an hour?
    • A couple of hundred lines is as much as you can get through.
    • This supports the idea of making progress in little steps.
      • Open source projects reject large patches.
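The “couple of hundred lines” figure suggests a simple guard one could put in a review workflow. This is a hypothetical sketch; the 200-line threshold is my reading of the numbers above, not a published rule:

```python
# Roughly what one reviewer can read carefully in the hour that matters most.
# The 200-line threshold is an assumption for illustration, not a standard.
REVIEWABLE_LINES = 200

def review_sessions(changed_lines):
    """How many careful one-hour review sessions a patch of this size needs."""
    if changed_lines <= 0:
        return 0
    return -(-changed_lines // REVIEWABLE_LINES)  # ceiling division

print(review_sessions(150))  # 1 -- a patch one person can actually read
print(review_sessions(650))  # 4 -- really a hint to split the patch instead
```

Anything that needs more than one session is a candidate for splitting, which is the same instinct behind open source projects rejecting large patches.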

Conway’s Law

35:45

A system reflects the organisational structure that built it.

  • It was meant as a joke and turned out to be true. (Herbsleb et al 1999)
  • Physical distance doesn’t affect post-release fault rates, but distance in the organisational chart does.
    • Nagappan et al (2007) and Bird et al (2009)
    • Based on all the data from building Windows Vista. An enormous volume of data.
    • They searched for indicators of post-release defects.
    • This goes against claims for the need for co-location.
    • Different managers with different goals have more impact than different continents.
  • Does this explain why big companies produce such bad code?
    • Somebody from HP corporate headquarters: “We can’t just have people running around doing the right thing: there are rules!”

Scientific Progress

39:15

“Progress” sometimes means saying “Ooops.”

  • Science is different from religion because it accepts its mistakes.
  • For example, there appeared to be strong statistical evidence that code metrics could predict post-release failure rates.
  • However, a later study showed that the correlation was actually between code size and failure rates.
    • El Emam et al (2001): “The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics”
      • Most metrics’ values increase with code size.
      • If you do a double-barrelled correlation, the latter accounts for all the signal.
  • Sophisticated metrics are not needed: just a line count.
  • The results are disappointing, but raising the standards is a good thing.

Folk Medicine for Software

41:36

“Systematizing and synthesizing colloquial practice has been very productive in other disciplines.”

  • Look at what people are actually doing.
    • Take it back to the lab and investigate why it works.
  • The next decade of Software Engineering will be about looking at successes and understanding why they work.

Beautiful Code

43:40

“What code is worth looking at, what code is worth reading?”

Beautiful Evidence

44:58

The book Wilson was writing at the time. It’s now published: Making Software: What Really Works, and Why We Believe It

“What we know and why we think it’s true”

  • Knowledge transfer
  • A better textbook
  • Change the debate

A group of people each choose one thing they believe to be true about software development and give the evidence as to why.

All textbooks for Software Development are crap.

  • Who really draws UML diagrams when thinking?

Software Craftsmanship

54:55

  • “It’s wonderful, but… show me the evidence”
  • Every ten years we need a new bandwagon so we can look innovative.

1) Wilson refers to it as Susan Dwek’s work
2) The name has already been stolen for a book by Edward Tufte that is well worth reading.


Analysts Guess Badly and Apple Is Blamed

There are two groups of people: those that do stuff and those that make guesses.

If the people who do stuff make a mistake, it is their fault.

If the people who guess are wrong, it is the fault of the people who do stuff.

Apple said Tuesday it sold more than 17 million iPhones in its fiscal fourth quarter ended Sept. 24, up from more than 14 million a year ago but lower than the 20 million or more that analysts had been expecting.

http://online.wsj.com/article/SB100...

Are the city analysts able to conclude that their estimates were wrong?   Of course not:

Some analysts said the results shouldn’t be seen as too negative. “We don’t think this is a slowdown in [market] share gains, it’s a pause,” said Brian Marshall, an analyst at ISI Group.

ibid

What bump in the road?  Your estimates were wrong.  Do a better job next time!

My guess is that Wall Street will continue to be blind to their own mistakes, no matter how many people try to point them out.


Woollard and the origins of Flow Production

Professor John Seddon has published an excellent article challenging the conventional wisdom around the need to achieve economies of scale.  I wholeheartedly agree with the point being made, and the conclusion that is reached: “Economy of scale is a myth. Economy comes from flow.”

I would, however, like to challenge one assertion that is made: that the benefits of flow were first identified by Taiichi Ohno at Toyota.

Ohno minimised stock throughout the process, his ideal batch size being one. Whereas most manufacturers still focus on unit costs (and employ accountants for whom it is central to their doctrine: for example, inventory counts as ‘value’ on the balance sheet), Ohno focused on the flow of the work, confident that better flow would lead to lower overall costs. And so it did. His system would tolerate higher unit costs; it was not dependent on low costs per unit. What was critical was the availability of the part, not the cost – an affront to convention. Ohno was the first to demonstrate that greater economy comes from flow rather than scale.

His second and more profound challenge to convention was to put variety into the line, making different models in the same production line.

Why do we believe in economies of scale?

These principles were identified by Frank G. Woollard, who introduced flow at Morris and Austin.  In his 1954 book “The Principles of Mass and Flow Production” he set these principles out in detail.  On the subject of minimising stock and work in progress he writes on page 69:

One of the many advantages of flow production is that it reduces the inventory, that is the stock in stores and the work in progress…  The ideal objective… is an inventory kept to the lowest figure consistent with the maintenance of the flow of supplies to the production lines.  The reduction of working capital or the improvement of the liquid position due to a smaller inventory is not the only advantage; the lesser bin capacity absorbed, the smaller stores area required and the saving of double handling and of multiple paper-work are all very real economies.  Incidentally with flow production methods the reduction of inventory may be as much as 75 per cent and even more.

Page  69 of Principles of Mass and Flow Production

In his article Professor Seddon presents a clear progression from Smith through Ford to Ohno, with manufacturing labouring under Smith’s flawed theory until Ohno developed the Toyota Production System.  Woollard’s writing suggests a more tragic story, where these principles were well understood in the factories of Britain’s industrial heartland, only to be lost in the decades that followed.  It seems likely that the principles were preserved or possibly rediscovered by Toyota.

In his comments on page 84 regarding the production of multiple products on a single production line Woollard shows that this knowledge was well dispersed and not limited to Birmingham, England.

At one period, for instance, the Ford Motor Co. handled, in sequence, all current types of cars and trucks – without pause or intermission – on one assembly line.  To-day, in their assembly plant, the Austin Motor Co. handle three body types and right- and left-hand steering on the same assembly track.  One American concern assembles 500 different sizes and types of air cleaners on one conveyor line.  They do not, in this instance, come in sequence but by a changeover limited to a 24 hour run for any one model.  These mutations can be matched on the machine lines provided sufficient care and attention is given to jig and fixture design, and the method of changing tools is carefully studied.

Page  84 of Principles of Mass and Flow Production

Seddon observes that the true cost in the public-sector factories is about the human factors and not just the economic figures.

One cost that is apparent is sickness, absenteeism and staff turnover. Being treated as a ‘resource’ to be ‘optimised’ is not motivating. Nor is the realisation that it is impossible to help people solve their problems because of the need to work to the internal arbitrary measures. In some respects life in modern public-sector factories is little different to the conditions that created Ford’s ‘five-day man’[4]. Both HMRC and NHS Direct currently report low levels of staff morale.

Why do we believe in economies of scale?

The fact that human engagement is more important than machine utilisation was also observed by Woollard and his contemporaries.  He relates this on page 45.

The Scania-Vabis Company of Sweden… claim that on group production there is a reduction of the cycle time of some 40 per cent; that the morale of these lines has been improved; that the workmanship is of higher quality; that the training time has been reduced, and that a greater degree of expertness is acquired.  They say that this latter is largely due to the greater interest engendered by several jobs flowing through the line.  It is true that certain machines are working at 10 per cent to 15 per cent less efficiency than would be achieved on the batch method: but, that be all, it is a small price to pay for the general improvement due to the group production system which they are hoping to extend.

Page  45 of Principles of Mass and Flow Production

Woollard saw the potential for Flow production as another “complete turn in the industrial revolution” (page 15) but he was also aware of the dangers and limitations.  Regarding human matters, the warning given by Woollard in his closing comments is poignant considering the tragedy of the decades that were to follow.

On the human side it must also be watched, for – like all tools of management – it can be misused.  Flow production, with its obvious sequences and accurate timing, could be the instrument of a slave-driving tyranny, whereas properly employed it will promote discipline in an equitable and gentle, if irresistible, manner, making the daily task lighter for all.

Flow production is, in fact, a logical development that has tremendous advantages and when properly applied is of benefit to the whole community.  These methods may not promote any individual art but they can provide a common basis for a comfortable existence, and, when they relieve mankind of the more arduous labours – as ultimately they will – those who labour can, if they desire, follow their bent as individual craftsmen in their extended leisure hours.

Page  187 of Principles of Mass and Flow Production

When I was growing up in the 1970s, Birmingham was dominated by the motor industry, and by strikes and conflict.  Now very little of that motor industry remains.

The 1970′s and its associated strikes and management problems decimated the industry. Japanese imports made matters worse and the car and motor cycle industry went through many mergers and closures. The great names such as BSA and Triumph lost ground against the Suzuki’s and Yamaha’s from Japan and the Datsun and Honda’s looked set to finish off what remained of the British Motor Industry.

http://www.birminghamuk.com/motorindustry.htm


Lean’s 98+% Failure Rate

Bob Marshall finds Lean’s 98% failure rate scary.  So he should!  Why would anybody want to attempt something when success is so unlikely?

Just as I was about to throw all my Lean books into the bin, the wise words of Vic Reeves came to mind.

“88.2% of Statistics are made up on the spot”

Less than 20% of statistics are actually based on any facts!  It is possible, even probable, that this figure of 98+% is just a work of pure fiction.  I need to find out where this number comes from.

Some people point to an Evolving Excellence blog post as a source, where we find Bill Waddell doing the hansei:

I keep using the Clifford Ransom numbers – 98%+ lean failure rate – which most folks seem to think jives with our feel for the situation.

Here we see the figure attributed to Clifford Ransom, a man with fine Lean credentials.  It took some effort to find the original source.

Bill Waddell quotes the figure with full attribution in a Super Factory article, but the link provided there is now dead.  Fortunately, there is a title given: “Lean Manufacturing: Fat Cash Flow”.

The original interview with Clifford Ransom by Dr. Robert Hall, AME Target Editor, can be found in full on the BMA Inc website.  Here we find some context and the actual definition of failure.

Q: Do you track many lean manufacturers?

A: No. Very few companies have advanced with lean manufacturing until you can see the results financially — perhaps one or two percent at best. Another two-three percent are “getting there” —OK but not outstanding. Another 10-15 percent mostly “just talk lean.” The majority, 80 percent or so, don’t even have the buzz words straight. Unless I see three pieces of evidence, I do not consider a management to be serious about lean manufacturing. 1) They must proclaim that they are becoming lean. They can call it whatever they want, but intentions must be boldly stated in a vision that everyone can understand. 2) They must tie compensation to lean systems. You are not becoming lean if you reward people for doing unlean things. 3) They have to drive the company with lean metrics — time and inventory measures. You have to persist to see results. You won’t see much change in the financials for 12 to 18 months, sometimes longer. Clearly, confirming the sustainability of superior performance takes much longer — years. Most managements waffle around, make only a half-hearted attempt, and never get rid of the inconsistencies in their own leadership.

The figure of 98+% and the word “fail” do not occur here.  What it says is that only one or two percent, at best, advance with lean manufacturing to a point where the results can be seen financially.

The inversion of 1-2% at best to 98+% is made by Bill Waddell in his article, where he paraphrases the interview:

But there is, according to Ransom, a 98%+ probability that whatever looks so lean on the shop floor makes no difference to the bottom line of the company.

So it turns out that this frightening statistic was not made up on the spot after all.  However, now that we have the context, there is clearly nothing to be afraid of.

98%+ lean failure rate

lean = “what looks like lean”, including attempts that “waffle around” and make only a “half-hearted attempt”.

fail = “No difference to the bottom line of the company” significant enough to attract the interest of a Wall Street investor.

A webinar with Clifford Ransom explains why so many Lean initiatives will fail in this way:

Lean is a terribly fragile thing. It is not robust, it can fail, it needs constant feeding and watering and reinforcing and scrutiny. And quite frankly I think it probably fails much more often than it succeeds. It’s counterintuitive, it’s innovative, it forces new ways of thinking. I think that empowering employees can be scary for both bosses and employees in some instances. There — I talk about the failure rate of Lean and I guess this slide would be better why companies fail at lean, or fail to even start at Lean. Change is threatening.

In the same webinar the 98+% figure is revised a little more favourably:

I think there’s really only 5% who practice the art skilfully in a world class master practitioner kind of way. I’m actually mellowing in my old age. I used to say only 2 to 3% of companies did it.

Perhaps Clifford Ransom’s third criterion for true Lean success explains why so many find it hard to attain:

3) They have to drive the company with lean metrics

Successful Lean is driven by the numbers, and it seems that people struggle to understand the numbers and their implications.  Why else would somebody use the following metrics to support a case for Lean’s failure?

Supplemental Evidence

a) “Only 2% of the companies reported achieving World Class manufacturing status.”

b) The 2007 IndustryWeek/Manufacturing Performance Institute Census of Manufacturers is a study of manufacturing metrics, management practices and financial results at the plant level.

17.8% say continuous improvement programs led to a major increase in productivity:

67.2% report some increase

12.4% report no change

2.2% report some decrease

0.5% report a major decrease

c) 10-20% of leaders in a typical organization are unable or unwilling to make the lean conversion.

Any approach that may lead to world class performance must be worth a try.  Larry Rubrich’s figure of 2% of companies achieving a “world class” manufacturing status is in line with Clifford Ransom’s observations.  He credits an Industry Week census as the source.

The results from another Industry Week census in 2007 provide a clear story for the success of continuous improvement, a pillar of Lean.  85% of companies succeeded in improving their productivity, and nearly 20% achieved a dramatic improvement.  On the failing side we see the 15% who reported no change or decline, and the 10-20% of leaders who are unwilling even to give it a try.
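Since successful Lean is supposed to be driven by the numbers, it is worth actually adding up the census figures. A quick sketch (the percentages are copied from the Industry Week results quoted above; the category labels are mine):

```python
# Figures from the 2007 IndustryWeek/MPI Census of Manufacturers, as quoted above.
census = {
    "major increase": 17.8,
    "some increase":  67.2,
    "no change":      12.4,
    "some decrease":   2.2,
    "major decrease":  0.5,
}

improved = census["major increase"] + census["some increase"]
flat_or_worse = census["no change"] + census["some decrease"] + census["major decrease"]

print(f"improved productivity: {improved:.1f}%")       # 85.0%
print(f"no change or decline:  {flat_or_worse:.1f}%")  # 15.1%
```

Read this way, the census looks like an 85% success story for continuous improvement, not a 98% failure rate.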

Implementing lean is like taking regular exercise.  It isn’t easy but done right it can benefit anybody.  My own abysmal failure to maintain an exercise regime does not change this.

Exceptional athletes have the dedication to take their exercise and training all the way to Olympic gold.  Their achievements do not make the rest of us failures, their achievements inspire us all to try harder.

