Clippings from The Mythical Man-Month

Preface to the 20th Anniversary Edition

Preface to the First Edition

Chapter 1. The Tar Pit

  • Large-system programming is a tar pit in which many great and powerful beasts have sunk
  • The Programming Systems Product
    • A Program
    • A Programming Product
    • A Programming System
    • A Programming Systems Product
  • The Joy of the Craft
    1. Making things
    2. Making things that are useful to other people
    3. Fashioning complex puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning
    4. Always learning
    5. Working in such a tractable, flexible medium
  • The Woes of the Craft
    1. One must perform perfectly
    2. One rarely controls the circumstances of his work, or even its goal
    3. Dependencies are often mal-designed, poorly implemented, incompletely delivered, and poorly documented
    4. Designing grand concepts is fun, but finding nitty little bugs is just work
    5. Debugging has linear convergence (the last difficult bugs take more time to find than the first)
    6. The product appears to be obsolete upon (or before) completion
      • As soon as one freezes a design, it becomes obsolete in terms of its concepts
      • The obsolescence of an implementation must be measured against other existing implementations, not against unrealized concepts

Chapter 2. The Mythical Man-Month

  • Scheduling tasks is hard
    1. Our techniques of estimating are poorly developed
    2. Our estimating techniques fallaciously confuse effort with progress, hiding the assumption that men and months are interchangeable
    3. Because we are uncertain of our estimates, software managers often lack the courteous stubbornness to defend them
    4. Schedule progress is poorly monitored
    5. When schedule slippage is recognized, the natural and traditional response is to add manpower
  • Optimism
    • The first false assumption: all will go well (each task will take only as long as it "ought" to take)
    • 3 stages of a creative activity (book, program, etc.) from The Mind of the Maker
      1. the idea
      2. the implementation
      3. the interaction
    • Because the medium (computer) is tractable, we expect few difficulties in implementation -> optimism
    • A large programming effort, however, consists of many tasks, some chained end-to-end. The probability that each will go well becomes vanishingly small.
  • The Man-Month
    • The man-month as a unit for measuring the size of a job is a dangerous and deceptive myth.
      • Cost does indeed vary as the product of the number of men and the number of months
      • Progress does not vary
    • Men and months are not interchangeable (for programming)
      1. They are interchangeable commodities only when a task can be partitioned among many workers with no communication among them
        • reaping wheat
        • picking cotton
      2. When a task cannot be partitioned because of sequential constraints, the application of more effort has no effect on the schedule
        • The bearing of a child takes nine months, no matter how many women are assigned.
      3. When a task can be partitioned but requires communication among the subtasks, the effort of communication must be added to the amount of work to be done
    • The added burden of communication is made up of two parts
      1. Training
        • Cannot be partitioned
      2. Intercommunication
        • O(n^2)
        • The added effort of communicating may fully counteract the division of the original task, so that adding more men lengthens rather than shortens the schedule
    • Since software construction is inherently a systems effort, an exercise in complex interrelationships, communication effort is great, and it quickly dominates the decrease in individual task time brought about by partitioning.
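
The intercommunication burden can be sketched numerically. A toy model (my own illustration, not from the book; the 0.1-month-per-pair overhead is an assumed constant): perfectly partitionable work shrinks as 1/n, but each of the n(n-1)/2 communication pairs adds a fixed monthly cost.

```python
def schedule_months(task_months, n_workers, overhead_per_pair=0.1):
    """Toy model of Brooks's intercommunication cost.

    Partitionable work shrinks as 1/n, but each of the
    n(n-1)/2 communication pairs adds a fixed monthly overhead.
    """
    pairs = n_workers * (n_workers - 1) // 2
    return task_months / n_workers + overhead_per_pair * pairs

# Past some team size, adding workers lengthens the schedule.
times = {n: schedule_months(12, n) for n in range(1, 11)}
best = min(times, key=times.get)  # the team size with the shortest schedule
```

With these assumed numbers the schedule bottoms out at five workers and then climbs again, which is the quadratic communication term dominating the 1/n partitioning gain.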
  • Systems Test
    • Because of optimism, we usually expect the number of bugs to be smaller than it turns out to be. Therefore testing is usually the most mis-scheduled part of programming.
    • Rule of thumb
      • 1/3 planning
      • 1/6 coding
      • 1/4 component test and early system test
      • 1/4 system test, all components in hand
    • Failure to allow enough time for system test, in particular, is peculiarly disastrous.
      • Since the delay comes at the end of the schedule, no one is aware of schedule trouble until almost the delivery date.
      • Delay at this point has unusually severe financial, as well as psychological, repercussions.
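
The rule of thumb above can be applied mechanically to any total schedule; a minimal sketch (the 48-week figure is an arbitrary example):

```python
from fractions import Fraction

# Brooks's rule-of-thumb split of a software schedule.
RULE_OF_THUMB = {
    "planning": Fraction(1, 3),
    "coding": Fraction(1, 6),
    "component test and early system test": Fraction(1, 4),
    "system test, all components in hand": Fraction(1, 4),
}

def allocate(total_weeks):
    """Split a total schedule by Brooks's fractions (weeks, as floats)."""
    return {phase: float(f * total_weeks) for phase, f in RULE_OF_THUMB.items()}

plan = allocate(48)  # e.g. a 48-week project: 16 planning, 8 coding, 12 + 12 test
```

Note that the fractions sum to exactly 1, and that fully half of the schedule is testing, which is the point of the rule.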
  • Gutless Estimating
    • False scheduling to match the patron's desired date is much more common in our discipline than elsewhere in engineering.
  • Regenerative Schedule Disaster
    • Brooks's Law

      Adding manpower to a late software project makes it later.

    • One can derive schedules using fewer men and more months.
    • One cannot, however, get workable schedules using more men and fewer months.

Chapter 3. The Surgical Team

  • The Problem
    • The problem with large teams: not everyone is a 10x programmer
    • The problem with the small, sharp team concept: too slow for really big systems
  • Mills's Proposal (The Surgical Team)

    Instead of each member cutting away on the problem, one does the cutting and the others give him every support that will enhance his effectiveness and productivity.

Chapter 4. Aristocracy, Democracy, and System Design

  • Conceptual Integrity
    • Software and Cathedrals
      • Most cathedrals show differences between parts built in different generations by different builders
      • Most software reflects conceptual disunity far worse than that of cathedrals (from the separation of design into many tasks done by many men)
    • Conceptual integrity is the most important consideration in system design
    • Chapter 4-7
      1. How is conceptual integrity to be achieved?
      2. Does not this argument imply an elite, or aristocracy, of architects, and a horde of plebeian implementers whose creative talents and ideas are suppressed?
      3. How does one keep the architects from drifting off into the blue with unimplementable or costly specifications?
      4. How does one ensure that every trifling detail of an architectural specification gets communicated to the implementer, properly understood by him, and accurately incorporated into the product?
  • Achieving Conceptual Integrity
    • The purpose of a programming system is to make a computer easy to use
    • The ratio of function to conceptual complexity is the ultimate test of system design
      • Neither function alone nor simplicity alone defines a good design
  • Aristocracy and Democracy
    • Dilemma
      • Conceptual integrity: the design must proceed from one mind (or a very small number of minds)
      • Schedule pressures: the system building needs many hands
    • Solution
      1. Chapter 3. The Surgical Team
      2. Separate architecture (UI) and implementation (Details)

        Where architecture tells what happens, implementation tells how it is made to happen

        • Architecture -> The complete and detailed specification of the user interface (UI)
        • The setting of external specifications is not more creative work than the designing of implementations.

Chapter 5. The Second-System Effect

  • An architect's first work is apt to be spare and clean
    1. He knows he doesn't know what he's doing
    2. He does it carefully and with great restraint
  • The Second-System Effect
    • The general tendency is to over-design the second system
      • OS/360 is a prime example of the second-system effect
    • Another tendency is to refine obsoleted techniques
  • How to avoid the second-system effect
    • Architect
      1. Be conscious of the peculiar hazards of the second-system
      2. Exert extra self-discipline to
        1. avoid functional ornamentation
        2. avoid extrapolation of functions that are obviated by changes in assumptions and purposes
    • Project Manager
      1. Insist on a senior architect who has at least two systems under his belt
      2. Stay aware of the special temptations

Second-system effect - Wikipedia

The tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems, due to inflated expectations and overconfidence.

Chapter 6. Passing the Word

How shall the manager ensure that everyone hears, understands, and implements the architects' decisions? (A successful solution for the System/360 hardware design effort)

  1. Written Specifications - the Manual
    • The external specification of the product
    • The architect must
      1. always be prepared to show an implementation for any feature he describes
      2. but not attempt to dictate the implementation
    • The style must be precise, full, and accurately detailed.
  2. Formal Definitions
    • Future specifications will consist of both a formal definition and a prose definition
      • Formal Definitions are precise
      • Prose Definitions can explain why
    • But there can be only one primary standard
    • An implementation can also serve as a formal definition
      • Advantages
        1. All questions can be settled unambiguously by experiment
        2. Debate is never needed
        3. Answers are always as precise as one wants
        4. Answers are always correct, by definition
      • Disadvantages
        1. The implementation may over-prescribe even the externals
          1. Side effects
          2. Not only what it must do, but also how to do it
        2. Sometimes give unexpected and unplanned answers (Vim and its Emulators)
        3. The use of an implementation as a formal definition is peculiarly susceptible to confusion as to whether the prose description or the formal description is in fact the standard.
  3. Direct Incorporation
  4. Conferences and Courts
    1. Weekly half-day conference of all the architects, plus official representatives of the implementers, and the market planners
    2. Semi-Annual supreme court sessions (two weeks every six months)
      • To solve issues, appeals, or disgruntlements
  5. Multiple Implementations
    • when you have enough time and manpower
  6. The Telephone Log
    • Keep a text log for questions/issues
  7. Product Test

Chapter 7. Why Did the Tower of Babel Fail?

  • Where did they lack?
    1. Communication
      • lack of communication led to disputes, bad feelings, and group jealousies.
    2. Organization (a consequence of 1)
  • Communication in the Large Programming Project
    • A formal project work book must be started at the beginning
    • The Project Workbook
      • All the documents of the project
        1. Objectives
        2. External specifications
        3. Interface specifications
        4. Technical standards
        5. Internal specifications
        6. Administrative memoranda
      • Why
        1. Technical prose is almost immortal
        2. Ensures that relevant information gets to all the people who need it (control of the distribution of information)
      • Mechanics
        1. Each programmer should see all the material
        2. Timely updating
  • Organization in the Large Programming Project
    • The purpose of organization is to reduce the amount of communication and coordination necessary
    • The means by which communication is obviated
      1. division of labor
      2. specialization of function

Chapter 8. Calling the Shot

  • Teams realize only about 50 percent of the working week as actual programming and debugging time (20 hours per week)
  • Programming productivity may be increased as much as five times when a suitable high-level language is used

Chapter 9. Ten Pounds in a Five-Pound Sack

  • Program Space as Cost
    • Since size is such a large part of the user cost of a programming system product, the builder must set size targets, control size, and devise size-reduction techniques
  • Size Control
    1. Budget all aspects of size
    2. Define exactly what a module must do when you specify how big it must be
  • Space Techniques
  • Representation (Data Structure) Is the Essence of Programming
    • Faster algorithms are almost always the result of strategic breakthrough rather than tactical cleverness.
    • Much more often, the strategic breakthrough will come from redoing the representation of the data or tables.

Chapter 10. The Documentary Hypothesis

  • Why Formal Documents?
    1. writing the decisions down is essential
    2. the documents will communicate the decisions to others
    3. a manager's documents give him a data base and checklist.

Chapter 11. Plan to Throw One Away

  • Pilot Plants and Scaling Up
    • Chemical engineers learned long ago that a process that works in the laboratory cannot be implemented in a factory in only one step.
    • Delivering that throwaway to customers buys time, but it does so only at the cost of agony for the user, distraction for the builders while they do the redesign, and a bad reputation for the product that the best redesign will find hard to live down.
    • Plan to throw one away; you will, anyhow.
  • The Only Constancy Is Change Itself
  • Plan the System for Change
    • Most important is the use of a high-level language and self-documenting techniques so as to reduce errors induced by changes.
  • Plan the Organization for Change
    • The reluctance to document designs comes from the designer's reluctance to commit himself to the defense of decisions which he knows to be tentative

      By documenting a design, the designer exposes himself to the criticisms of everyone, and he must be able to defend everything he writes. If the organizational structure is threatening in any way, nothing is going to be documented until it is completely defensible.

    • Structuring an organization for change is much harder than designing a system for change.
    • Management structures also need to be changed as the system changes.
  • Plan to Throw One Away
    • The fundamental problem with program maintenance is that fixing a defect has a substantial (20-50 percent) chance of introducing another.
      1. unless the structure is pure or the documentation very fine, the far-reaching effects of the repair will be overlooked.
      2. the repairer is usually not the man who wrote the code, and often he is a junior programmer or trainee.
    • Program maintenance requires far more system testing per statement written than any other programming.
  • One step forward and one step back
    • Less and less effort is spent on fixing original design flaws; more and more is spent on fixing flaws introduced by earlier fixes.

Chapter 12. Sharp Tools

A good workman is known by his tools

  • What are the tools about which the manager must philosophize, plan, and organize?
    1. Computer Facility
    2. Operating System
    3. Language
      1. High-level language
      2. Interactive programming
    4. Utilities
    5. Debugging aids
    6. Test-case generators
    7. Text-processing system (for documentation)

Chapter 13. The Whole and the Parts

  • This Chapter
    1. How does one build a program to work?
    2. How does one test a program
    3. How does one integrate a tested set of component programs into a tested and dependable system?
  • Designing the Bugs Out
    • Bug-proofing the definition
      • The most pernicious and subtle bugs are system bugs arising from mismatched assumptions made by the authors of various components
      • Conceptual integrity can solve this issue
      • Careful function definition
      • Careful specification
    • Testing the specification

      They won't tell you they don't understand it; they will happily invent their way through the gaps and obscurities.

    • Top-down design
      • Program Development by Stepwise Refinement
        • Identify design as a sequence of refinement steps (refactoring?)
          1. Sketch a rough task definition and a rough solution method that achieves the principal result
          2. Examine the definition more closely to see how the result differs from what is wanted
          3. Take the large steps of the solution and break them down into smaller steps
        • During this process, developer identifies modules
        • Use as high-level a notation as is possible at each step, exposing the concepts and concealing the details until further refinement becomes necessary
      • How top-down design avoids bugs
        1. The clarity of structure and representation makes the precise statement of requirements and functions of the modules easier
        2. The partitioning and independence of modules avoids system bugs
        3. The suppression of detail makes flaws in the structure more apparent
        4. The design can be tested at each of its refinement steps
          • So testing can start earlier
          • testing can focus on the proper level of detail at each step
      • It's much easier to see exactly when and why one should throw away a gross design and start over
      • Many poor systems come from an attempt to salvage a bad basic design and patch it with all kinds of cosmetic relief. Top-down design reduces the temptation
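
A minimal sketch of stepwise refinement (the "summarize a log file" task and all names are hypothetical, chosen only to illustrate the technique):

```python
# Step 1: rough task definition — "summarize a log file" — expressed as
# a top-level solution whose parts are still unrefined.
def summarize(lines):
    records = parse(lines)   # refined in step 2
    stats = tally(records)   # refined in step 2
    return render(stats)     # refined in step 2

# Step 2: break each large step into smaller ones, committing to a
# representation ((level, message) tuples) only when it becomes necessary.
def parse(lines):
    return [tuple(line.split(":", 1)) for line in lines if ":" in line]

def tally(records):
    stats = {}
    for level, _ in records:
        stats[level] = stats.get(level, 0) + 1
    return stats

def render(stats):
    return ", ".join(f"{k}={v}" for k, v in sorted(stats.items()))

report = summarize(["ERROR:disk full", "WARN:slow", "ERROR:net down"])
```

Each refinement step is testable on its own, and the top-level sketch makes it cheap to throw away a bad decomposition before much detail has been invested in it.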
    • Structured programming
      • Do not use goto
      • Think about the control structures of a system as control structures, not as individual branch statements.
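
The contrast can be sketched in a few lines: a search written with a flag variable that simulates a jump target, versus the same search where the loop itself is the control structure (both functions are illustrative, not from the book):

```python
# Unstructured style: a flag variable standing in for a goto.
def find_flag(items, target):
    found = False
    i = 0
    while i < len(items) and not found:
        if items[i] == target:
            found = True
        i += 1
    return i - 1 if found else -1

# Structured style: the loop IS the control structure; no flags.
def find_structured(items, target):
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1
```

Both behave identically, but the structured version states its control flow once, as a whole, rather than scattering it across branch conditions.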
  • Component Debugging (The cycle of debugging procedures)
    1. On-machine debugging
    2. Memory dumps
    3. Snapshots
    4. Interactive debugging
  • System Debugging
    • System debugging will take longer than one expects
    • Its difficulty justifies a thoroughly systematic and planned approach
      1. Use debugged components
        • The sooner one puts the pieces together, the sooner the system bugs will emerge
        • One does not know all the expected effects of known bugs
      2. Build plenty of scaffolding (programs and data built for debugging purpose)
        1. dummy component (fake/mock/stub)
        2. miniature file
          • dummy file
        3. auxiliary programs
          • generators for test data (factories)
          • special analysis printouts
          • cross-reference table analyzers
      3. Control changes
      4. Add one component at a time
        • Assume there will be lots of bugs
        • Plan an orderly procedure for snaking bugs out
      5. Quantize updates
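
The scaffolding items above (dummy components, miniature files, test-data generators) translate directly into modern stubs and fixtures; a small sketch with hypothetical names:

```python
# Scaffolding for debugging a pipeline before the real database
# component exists (all names here are hypothetical).

class DummyDatabase:
    """Dummy component: same interface as the real database,
    but answers from a miniature in-memory data set."""
    def __init__(self, rows):
        self.rows = rows

    def query(self, key):
        return [r for r in self.rows if r["key"] == key]

def make_test_rows(n):
    """Auxiliary program: a generator for test data."""
    return [{"key": i % 2, "value": i * 10} for i in range(n)]

db = DummyDatabase(make_test_rows(4))
evens = db.query(0)  # the rows generated with key 0
```

The dummy can be swapped for the real component later without changing the code under test, which is exactly the point of building scaffolding early.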

Chapter 14. Hatching a Catastrophe

  • Day-by-day slippage is harder to recognize, harder to prevent, harder to make up
    • Each one only postpones some activity by a half-day or a day.
    • And the schedule slips, one day at a time.
  • Milestones or Millstones?
    • Milestones must be concrete, specific, measurable events, defined with knife-edge sharpness.
    • Two interesting studies of estimating behavior show that:
      1. Estimates of the length of an activity, made and revised carefully every two weeks before the activity starts, do not significantly change as the start time draws near, no matter how wrong they ultimately turn out to be.
      2. During the activity, overestimates of duration come steadily down as the activity proceeds.
      3. Underestimates do not change significantly during the activity until about three weeks before the scheduled completion.
  • "The Other Piece Is Late, Anyway"
    • Critical-path scheduling
    • PERT chart
  • Under the Rug
    • When a first-line manager sees his small team slipping behind, he is rarely inclined to run to the boss with this woe.
    • The first-line manager's interests and those of the boss have an inherent conflict here.

      The first-line manager fears that if he reports his problem, the boss will act on it. Then his action will preempt the manager's function, diminish his authority, foul up his other plans. So as long as the manager thinks he can solve it alone, he doesn't tell the boss.

    • Two rug-lifting techniques
      1. Reducing the role conflict
        1. The boss must distinguish between action information and status information
        2. The boss must discipline himself
          1. not to act on problems his managers can solve
          2. not to act on problems when he is explicitly reviewing status
        3. The boss can label meetings, reviews, conferences, as status-review meetings versus problem-action meetings, and controls himself accordingly.
      2. Yanking the rug off
        • It is necessary to have review techniques by which the true status is made known, whether cooperatively or not.
        • A report showing milestones and actual completions is the key document.
          1. Everyone knows the questions
          2. The component manager should be prepared to explain why it's late, when it will be finished, what steps he's taking, and what help, if any, he needs
  • Plans and Controls team

Chapter 15. The Other Face

  • The Other Face
    • A computer program is a message from a man to a machine
    • But a written program has another face, that which tells its story to the human user.
      • Even when the author is the sole user, memory will fail him, and he will require refreshing on the details of his own handiwork.
    • The other face to the user is fully as important as the face to the machine.
    • The "how" of good documentation
  • What Documentation Is Required?
    • To use a program
      1. Purpose
      2. Environment
      3. (Input) domain and (output) range
      4. Functions realized and algorithms used
      5. Input-output formats
      6. Operating instructions
      7. Options
      8. Running time
      9. Accuracy and checking
    • To believe a program (test cases)
      1. Mainline cases (chief functions for commonly encountered data)
      2. Barely legitimate cases (probe the edge of the input data domain)
      3. Barely illegitimate cases (probe the domain boundary from the other side)
    • To modify a program
      1. A flow chart or subprogram structure graph
      2. Complete descriptions of the algorithms used, or self references to such descriptions in the literature
      3. An explanation of the file structures
      4. An overview of the data pass structure (data flow)
      5. A discussion of modifications contemplated in the original design, the nature and location of hooks and exits
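
The three test-case categories under "to believe a program" can be made concrete for a hypothetical function (the function and its domain are invented for illustration):

```python
def to_percent(x):
    """Convert a fraction in [0, 1] to a whole percentage."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("fraction out of range")
    return round(x * 100)

# 1. Mainline case: the chief function on commonly encountered data.
mainline = to_percent(0.5)

# 2. Barely legitimate cases: probe the edge of the input domain.
edges = (to_percent(0.0), to_percent(1.0))

# 3. Barely illegitimate cases: probe the boundary from the other side.
try:
    to_percent(1.000001)
    rejected = False
except ValueError:
    rejected = True
```

The point of the third category is that the program is believed only if it visibly refuses input just outside its stated domain.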
  • The Flow-Chart Curse
    • Flow charts show the decision structure of a program, which is only one aspect of its structure.
    • The one-page flow chart for a substantial program becomes essentially a diagram of program structure, and of phases or steps.
    • The detailed blow-by-blow flow chart, however, is an obsolete nuisance, suitable only for initiating beginners into algorithmic thinking.
    • Flow charting is more preached than practiced.
  • Self-Documenting Programs
    • As a principal objective, we must attempt to minimize the burden of documentation, the burden neither we nor our predecessors have been able to bear successfully.
    • An approach
      1. Use the parts of the program that have to be there anyway (symbol names), for programming language reasons, to carry as much of the documentation as possible
      2. Use space and format as much as possible to improve readability and show subordination and nesting
      3. Insert the necessary prose documentation as paragraphs of comment
        • Paragraph comments are better than line-by-line comments because they usually give intelligibility and overview to the whole thing
    • Write the documentation when the program is first written
    • Why not?
      1. the increase in the size of the source code
      2. more keystrokes
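
The three techniques of the self-documenting approach fit in one short fragment (the credential-expiry example and all names are hypothetical):

```python
SECONDS_PER_DAY = 86_400  # names the language needs anyway carry the documentation

def days_until_expiry(issued_at, ttl_seconds, now):
    # Paragraph comment giving overview rather than line-by-line narration:
    # a credential issued at `issued_at` is valid for `ttl_seconds`; we
    # report whole days remaining, clamped at zero once it has expired.
    remaining = (issued_at + ttl_seconds) - now
    if remaining <= 0:
        return 0
    return remaining // SECONDS_PER_DAY

days = days_until_expiry(issued_at=0, ttl_seconds=3 * 86_400, now=86_400)
```

Nothing here duplicates the code in prose; the names, the layout, and one paragraph comment do all the documentary work.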

Chapter 16. No Silver Bullet - Essence and Accident in Software Engineering

There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

  • No Silver Bullet: Essence and Accidents of Software Engineering
  • Silver Bullet

    Of all the monsters that fill the nightmares of our folklore, none terrify more than werewolves, because they transform unexpectedly from the familiar into horrors. For these, one seeks bullets of silver that can magically lay them to rest.

    • The familiar software project, at least as seen by the nontechnical manager, has something of this character
  • Essential Difficulties
    1. Not that software progress is so slow, but that computer hardware progress is so fast (the fastest in human history)
    2. The difficulties of software technology
      • Essence (inherent)

        I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.

        • Complexity
          • DRY
            • Software entities are more complex for their size than perhaps any other human construct because no two parts are alike
            • If they are, we make the two similar parts into a subroutine
          • States
            • Software systems have orders-of-magnitude more states than computers do.
          • Leads to many problems
            1. Communication among team members
            2. Function complexity -> hard to use
            3. Structure complexity -> hard to extend
            4. Structure complexity -> unvisualized states
            5. Management
        • Conformity
        • Changeability
          • Software is constantly subject to pressures for change
            1. The software in a system embodies its function, and the function is the part that most feels the pressures of change
            2. Software can be changed more easily (than buildings, cars, computers, etc.)
          • All successful software gets changes
            1. People try it in new cases at the edge of, or beyond, the original domain
            2. Successful software also survives beyond the normal life of the hardware for which it is first written
          • Software is embedded in a cultural matrix of applications, users, laws, and hardware. These all change continually
        • Invisibility
          • Software constitutes not one, but several, general directed graphs, superimposed one upon another
          • This lack of visualization not only impedes the process of design within one mind, it severely hinders communication among minds
      • Accidents (not inherent)
  • Past Breakthroughs Solved Accidental Difficulties

    Breakthrough -> Accidental difficulty solved
      • High-level languages -> machine-language complexity
      • Time-sharing -> slow turn-around
      • Unified programming environments -> using programs together
    • High-level languages
      • It frees a program from much of the accidental complexity of the machine.
      • The most a high-level language can do is to furnish all the constructs the programmer imagines in the abstract program
      • At some point the elaboration of a high-level language becomes a burden that increases, not reduces, the intellectual task of the user who rarely uses the esoteric constructs
    • Time-sharing
      • Time-sharing preserves immediacy, and hence enables us to maintain an overview of complexity
      • The slow turnaround of batch programming means that we inevitably forget the minutiae, if not the very thrust, of what we were thinking when we stopped programming and called for compilation and execution.
        • Slow turn-around, like machine-language complexities, is an accidental difficulty of the software process.
      • The principal effect is to shorten system response time.
    • Unified programming environments
      • They attack the accidental difficulties of using programs together, by providing integrated libraries, unified file formats, and pipes and filters
  • Hopes for the Silver
    1. Another high-level language? (Like Ada)
      • Ada, after all, is just another high-level language
      • The biggest payoff from high-level languages came from the first transition
    2. OOP
      • Abstract data types and hierarchical types each removes one more accidental difficulty from the process
      • The complexity of the design itself is essential
    3. Artificial intelligence
      • Is strong artificial intelligence P or NP? - Google Search
      • The hard thing about building software is deciding what to say, not saying it.
      • Expert systems (Linter?)
        • suggesting interface rules
        • advising on testing strategies
        • remembering bug-type frequencies
        • offering optimization hints
        • The most powerful contribution of expert systems will surely be to put at the service of the inexperienced programmer the experience and accumulated wisdom of the best programmers.
      • "Automatic" programming
        • the generation of a program for solving a problem from a statement of the problem specifications.
        • in most cases it is the solution method, not the problem, whose specification has to be given.
        • Problems can be solved by using generators
          1. The problems are readily characterized by relatively few parameters
          2. There are many known methods of solution to provide a library of alternatives
          3. Extensive analysis has led to explicit rules for selecting solution techniques, given problem parameters.
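
The three conditions for generators can be sketched as a toy dispatcher: a few problem parameters, a library of known methods, and explicit selection rules (all method names here are illustrative, not a real solver library):

```python
# A toy "automatic programming" generator in the restricted sense above.
METHOD_LIBRARY = {
    "closed_form": "solve analytically",
    "iterative": "solve by fixed-point iteration",
    "monte_carlo": "estimate by random sampling",
}

def select_method(linear, dimensions):
    """Explicit rules mapping problem parameters to a solution method."""
    if linear:
        return "closed_form"
    if dimensions <= 3:
        return "iterative"
    return "monte_carlo"

choice = select_method(linear=False, dimensions=10)
```

This is why Brooks calls such systems generators rather than automatic programmers: the hard part, enumerating the methods and the selection rules, was done by hand in advance.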
    4. Graphical programming
      1. The flow chart is a very poor abstraction of software structure
        • it has proved to be essentially useless as a design tool
        • programmers draw flow charts after, not before, writing the programs they describe.
      2. The screens of today are too small, in pixels, to show both the scope and the resolution of any serious detailed software diagram
      3. Software is very difficult to visualize
    5. Program verification
      • Program verification does not promise, however, to save labor
      • Program verification does not mean error-proof programs.
        • Mathematical proofs also can be faulty.
        • So whereas verification might reduce the program-testing load, it cannot eliminate it.
      • Even perfect program verification can only establish that a program meets its specification
        • The hardest part of the software task is arriving at a complete and consistent specification, and much of the essence of building a program is in fact the debugging of the specification.
    6. Environments and tools
      • The most that IDEs promise is freedom from syntactic errors and simple semantic errors.
      • Perhaps the biggest gain yet to be realized in the programming environment is the use of integrated database systems to keep track of the myriads of details that must be recalled accurately by the individual programmer and kept current in a group of collaborators on a single system.
    7. Workstations
      • A factor of 10 in machine speed would surely leave think-time the dominant activity in the programmer's day.
  • Promising Attacks on the Conceptual Essence
    1. Buy vs. Build
      • The most radical possible solution for constructing software is not to construct it at all
      • The cost of software has always been development cost, not replication cost
      • The use of n copies of a software system effectively multiplies the productivity of its developers by n
      • The key issue is applicability
        • The big change has been in the hardware/software cost ratio.
          • The buyer of a $2-million machine in 1960 felt that he could afford $250,000 more for a customized payroll program
          • Buyers of $50,000 office machines today cannot conceivably afford customized payroll programs
    2. Requirements refinement and rapid prototyping
      • The hardest single part of building a software system is deciding precisely what to build.
      • The clients do not know what they want.
        1. They usually do not know what questions must be answered,
        2. They almost never have thought of the problem in the detail that must be specified.
      • It is really impossible for clients, even those working with software engineers, to specify completely, precisely, and correctly the exact requirements of a modern software product before having built and tried some versions of the product they are specifying.
      • The purpose of the prototype is to make real the conceptual structure specified, so that the client can test it for consistency and usability.
    3. Incremental development
      • Grow, not build, software
      • We freely use other elements of the metaphor, such as
        1. specifications
        2. assembly of components
        3. scaffolding
      • Top-down Design
      • Teams can grow much more complex entities in four months than they can build
    4. Great designers
      • We can get good designs by following good practices instead of poor ones.
      • The very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort.
      • Software systems that have excited passionate fans are those that are the products of one or a few designing minds, great designers.
        • Unix
        • Pascal
        • Smalltalk
        • FORTRAN
        • Modula
      • I think the most important single effort we can mount is to develop ways to grow great designers.
      • Each software organization must determine and proclaim that great designers are as important to its success as great managers are, and that they can be expected to be similarly nurtured and rewarded.
      • How to grow great designers?
        • Systematically identify top designers as early as possible. The best are often not the most experienced.
        • Assign a career mentor to be responsible for the development of the prospect, and keep a careful career file.
        • Devise and maintain a career development plan for each prospect, including carefully selected apprenticeships with top designers, episodes of advanced formal education, and short courses, all interspersed with solo design and technical leadership assignments.
        • Provide opportunities for growing designers to interact with and stimulate each other.

Chapter 17. "No Silver Bullet" Refired

  • "Accidental" complexity means incidental or appurtenant complexity, not complexity arising by chance
    • The essence: the mental crafting of the conceptual construct
    • The accident: its implementation process
  • If the accidental part of the work is less than 9/10 of the total, shrinking it to zero will not give an order of magnitude productivity improvement
  • Complexity is by levels

    Most of the complexities which are encountered in systems work are symptoms of organizational malfunctions.

    NSB advocates adding necessary complexity to a software system:

    • Hierarchically, by layered modules or objects
    • Incrementally, so that the system always works.
  • Jones's Point—Productivity Follows Quality

    Focus on quality, and productivity will follow.

  • Object-Oriented Programming -- Will a Brass Bullet Do?
    • Buildings with bigger pieces
      • Views of OOP
        1. Modularity
        2. Encapsulation
        3. Inheritance
        4. Strong abstract data-typing
      • All these disciplines can be had without taking the whole Smalltalk or C++ package
    • Why has the object-oriented technique grown slowly?
      • The C++ Report

        The problem is that programmers in OO have been experimenting in incestuous applications and aiming low in abstraction, instead of high.

        For example, they have been building classes such as linked-list or set instead of classes such as user-interface or radiation beam or finite-element model.

      • OO is a type of design

        OO has been tied to a variety of complex languages.

        Instead of teaching people that OO is a type of design, and giving them design principles, people have taught that OO is the use of a particular tool.

        We can write good or bad programs with any tool.

        Unless we teach people how to design, the languages matter very little. The result is that people do bad designs with these languages and get very little value from them. If the value is small, it won't catch on.

    • Front-loaded costs, down-stream benefits
      • Switching to an OO system costs more up front
      • The big benefits pay off during successor building, extension, and maintenance activities

        Object-oriented techniques will not make the first project development any faster, or the next one. The fifth one in that family will go blazingly fast.

      • Betting real up-front money for the sake of projected but iffy benefits later is what investors do every day.
      • In many programming organizations, however, it requires real managerial courage, a commodity much scarcer than technical competence or administrative proficiency. (Tech Debt)
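The four disciplines listed above (modularity, encapsulation, inheritance, strong abstract data-typing) can each be shown in a few lines. A minimal Python sketch, with all class and function names invented here for illustration:

```python
import math
from abc import ABC, abstractmethod

# Strong abstract data-typing: clients program against this interface,
# never against a concrete representation.
class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

# Encapsulation: the radius lives in a private attribute; callers see
# only the operations the class chooses to expose.
class Circle(Shape):
    def __init__(self, radius: float):
        self._radius = radius

    def area(self) -> float:
        return math.pi * self._radius ** 2

# Inheritance: Square shares the Shape contract, so both kinds of
# object can be handled uniformly by modular client code.
class Square(Shape):
    def __init__(self, side: float):
        self._side = side

    def area(self) -> float:
        return self._side ** 2

# Modularity: this function depends only on the Shape interface.
def total_area(shapes: list[Shape]) -> float:
    return sum(s.area() for s in shapes)
```

As Brooks notes, none of this requires taking the whole Smalltalk or C++ package; the disciplines matter more than the language.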
  • What About Reuse?

    We conjecture that barriers to reuse are not on the producer side, but on the consumer side. If a software engineer, a potential consumer of standardized software components, perceives it to be more expensive to find and verify a component that meets his need than to write one anew, a new, duplicative component will be written. Notice we said perceives above. It doesn't matter what the true cost of reconstruction is.

    • Learning Large Vocabularies — A Predictable but Unpredicted Problem for Software Reuse
      • The higher the level at which one thinks, the more numerous the primitive thought-elements one has to deal with.
      • Whether we do this (reusing) by object class libraries or procedure libraries, we must face the fact that we are radically raising the sizes of our programming vocabularies.
      • How people acquire language
        • People learn in sentence contexts, so we need to publish many examples of composed products, not just libraries of parts.
        • People do not memorize anything but spelling. They learn syntax and semantics incrementally, in context, by use.
        • People group word composition rules by syntactic classes, not by compatible subsets of objects.

Chapter 18. Propositions of The Mythical Man-Month: True or False?

Chapter 19. The Mythical Man-Month: after 20 years

  • Why Is There a Twentieth Anniversary Edition?
    1. The software development discipline has not advanced normally or properly.
      • Chapter 16. No Silver Bullet - Essence and Accident in Software Engineering
    2. The Mythical Man-Month is only incidentally about software but primarily about how people in teams make things.
      • Managing a software project is more like other management than most programmers initially believe
  • What was right when written, and still is
    • The Central Argument: Conceptual Integrity and the Architect
      • Conceptual integrity
      • The architect
      • Separation of architecture from implementation and realization
      • Recursion of architects
    • The Second-System Effect: Featuritis and Frequency-Guessing
      • Featuritis
        • The besetting temptation for the architect of a general purpose tool is to overload the product with features of marginal utility, at the expense of performance and even of ease of use.
        • Frequently, the original system architect has gone on to greater glories, and the architecture is in the hands of people with less experience at representing the user's overall interest in balance.
      • Defining the user set
        • Each member of the design team will surely have an implicit mental image of the users, and each designer's image will be different.
        • Writing down the attributes of the expected user set, including:
          1. Who they are
          2. What they need
          3. What they think they need
          4. What they want
      • Frequencies
        • For any software product, any of the attributes of the user set is in fact a distribution, with many possible values, each with its own frequency.
        • Write down explicit guesses for the attributes of the user set. It is far better to be explicit and wrong than to be vague.
          1. The process of carefully guessing the frequencies will cause the architect to think very carefully about the expected user set.
          2. Writing the frequencies down will subject them to debate, which will illuminate all the participants and bring to the surface the differences in the user images that the several designers carry
          3. Enumerating the frequencies explicitly helps everyone recognize which decisions depend upon which user set properties
      • 2 Second-System in this book
        • The "second" system described in Chapter 5 is the second system fielded, the follow-on system that invites added function and frills
        • The "second" system in Chapter 11 is the second try at building what should be the first system to be fielded. It is built under all the schedule, talent, and ignorance constraints that characterize new projects—the constraints that exert a slimness discipline.
  • The Triumph of the WIMP Interface
    • WIMP
      • Windows
      • Icons
      • Menus
      • Pointing interface
    • Conceptual integrity via a metaphor
      • The WIMP is a superb example of a user interface that has conceptual integrity, achieved by the adoption of a familiar mental model
      • The reliable interpretation of free-form generated English commands is beyond the present state of the art
        • They wisely picked up from the usual desktop its one example of command selection—the printed buck slip, on which the user selects from among a constrained menu of commands whose semantics are standardized.
    • Command utterances and the two-cursor problem
      • Two-cursor problem: One cursor is having to do the work of two
        1. pick an object in the desktop part of the window;
        2. pick a verb in the menu portion
      • A brilliant solution: Use one hand on the keyboard to specify verbs and the other hand on a mouse to pick nouns
      • User power versus ease of use
        • One of the hardest issues facing software architects is exactly how to balance user power versus ease of use.
        • The high-frequency menu verbs each have single-key + command-key equivalents, mostly chosen so that they can easily be struck as a single chord with the left hand.
      • Incremental transition from novice to power user
    • The fate of WIMP: Obsolescence
      • Pointing will still be the way to express nouns as we command our machines;
      • Speech is surely the right way to express the verbs.
  • Don't Build One to Throw Away - The Waterfall Model Is Wrong!
    1. It assumes a project goes through the process once
      • The waterfall model assumes that the mistakes will all be in the realization, and thus that their repair can be smoothly interspersed with component and system testing.
      • One might discard and redesign the first system piece by piece, rather than in one lump
      • The waterfall model puts system test, and therefore by implication user testing, at the end of the construction process.
    2. It assumes one builds a whole system at once
      • combining the pieces for an end-to-end system test after all of the implementation design, most of the coding, and much of the component testing has been done.
      • There has to be upstream movement

        Designing the implementation will show that some architectural features cripple performance; so the architecture has to be reworked.

  • An Incremental-Build Model Is Better - Progressive Refinement
    1. Building an end-to-end skeleton system

      Harlan Mills, working in a real-time system environment, early advocated that we should build the basic polling loop of a real-time system, with subroutine calls (stubs) for all the functions, but only null subroutines. Compile it; test it. It goes round and round, doing literally nothing, but doing it correctly.
      • At every stage we have a running system
      • Since we have a working system at all times
        1. we can begin user testing very early
        2. we can adopt a build-to-budget strategy that protects absolutely against schedule or budget overruns (at the cost of possible functional shortfall).
    2. Parnas Families
      • Designing a software product as a family of related products
      • To define their (both lateral extensions and succeeding versions) function or platform differences so as to construct a family tree of related products
      • Put near its root those design decisions that are less likely to change.
    3. Microsoft's "Build Every Night" Approach (CI/CD)

      After we first ship, we will be shipping later versions that add more function to an existing, running product. Why should the initial building process be different? Beginning at the time of our first milestone [where the march to first ship has three intermediate milestones] we rebuild the developing system every night [and run the test cases]. The build cycle becomes the heartbeat of the project. Every day one or more of the programmer-tester teams check in modules with new functions. After every build, we have a running system. If the build breaks, we stop the whole process until the trouble is found and fixed. At all times everybody on the team knows the status.

      It is really hard. You have to devote lots of resources, but it is a disciplined process, a tracked and known process. It gives the team credibility to itself. Your credibility determines your morale, your emotional state.

    4. Incremental-Build and Rapid Prototyping
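Mills's skeleton-system idea above can be sketched in a few lines. All names are illustrative, not from the book; the point is that the loop runs end to end from day one, with every function present only as a null stub:

```python
# A dummy-stub skeleton in the spirit of Mills's basic polling loop:
# every subroutine exists but does nothing, so the system compiles,
# runs, and "goes round and round, doing nothing correctly".

def poll_sensors():        # stub: will later read real inputs
    return None

def update_state(event):   # stub: will later hold the real logic
    pass

def emit_outputs():        # stub: will later drive real outputs
    pass

def polling_loop(cycles: int) -> int:
    """Run the skeleton for a fixed number of cycles; return the count
    of completed cycles, proving the loop structure works."""
    completed = 0
    for _ in range(cycles):
        event = poll_sensors()
        update_state(event)
        emit_outputs()
        completed += 1
    return completed
```

Each stub is then fleshed out one at a time, so there is a running system at every stage, enabling early user testing and a build-to-budget strategy.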
  • Parnas Was Right, and I Was Wrong about Information Hiding
    • Programmers are most effective if shielded from, not exposed to, the innards of modules not their own
    • Information hiding is the only way of raising the level of software design.
    • If we can limit design and building so that we only do the putting together and parameterization of such chunks from pre-built collections, we have radically raised the conceptual level, and eliminated the vast amounts of work and the copious opportunities for error that dwell at the individual statement level.
    • 3 steps of information-hiding
      1. Define a module as a software entity with its own data model and its own set of operations
      2. The upgrading of the module into an abstract data type (interface!)
        • The abstract data type provides a uniform way of thinking about and specifying module interfaces, and an access discipline that is easy to enforce.
      3. OOP
        • inheritance
    • Modules are not just programs, but instead are program products
    • Some people are vainly hoping for significant module reuse without paying the initial cost of building product-quality modules—generalized, robust, tested, and documented.
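Step 2 of the information-hiding progression above (module upgraded to an abstract data type) can be sketched as follows. The `Stack` class is a hypothetical example, not code from the book; its representation is hidden behind a fixed set of operations, which is the access discipline Parnas advocated:

```python
# An abstract data type in the information-hiding sense: the data model
# (here, a Python list) is private, and clients can only use the
# operations the module exposes.

class Stack:
    def __init__(self):
        self._items = []          # hidden representation

    def push(self, item) -> None:
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty Stack")
        return self._items.pop()

    def __len__(self) -> int:
        return len(self._items)
```

Because clients never touch `_items`, the representation could be swapped (linked list, fixed array) without changing any calling code, which is exactly what raises the level of design.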
  • How Mythical Is the Man-Month? Boehm's Model and Data
    • Adding more people to a late project always makes it more costly, but it does not always cause it to be completed later
    • New people added late in a development project must be team players willing to pitch in and work within the process, and not attempt to alter or improve the process itself!
    • The work must be repartitioned, a process I have often found to be nontrivial.
  • People Are Everything (Well, Almost Everything)
    • The quality of the people on a project, and their organization and management, are much more important factors in success than are the tools they use or the technical approaches they take.
    • Peopleware
      • Peopleware: Productive Projects and Teams
        • The manager's function is not to make people work, it is to make it possible for people to work.
  • The Power of Giving Up Power
    • Creativity comes from individuals and not from structures or processes
    • The Principle of Subsidiary Function
  • The State and Future of Software Engineering
    • Chemical engineering
      1. rules of thumb
      2. empirical nomograms
      3. formulas for designing particular components
      4. mathematical models for heat transport, mass transport, and momentum transport in single vessels
    • Software engineering is merely as immature as chemical engineering was in 1945
    • This complex craft will demand our continual development of the discipline, our learning to compose in larger units, our best use of new tools, our best adaptation of proven engineering management methods, liberal application of common sense, and a God-given humility to recognize our fallibility and limitations.