Sunday, December 30, 2018

F.W. Taylor



How many project managers are still laboring with the aftermath of Frederick Winslow Taylor, more popularly known as F.W. Taylor? You might ask: who was Taylor? F.W. Taylor was one of the first to study business systematically. He brought "Taylorism" into the business culture in the years leading up to World War I. By 1915, his ideas were considered quite advanced, and they had a significant impact well into the mid-20th century.

Taylor was a mechanical engineer who worked early on in a metal products factory. Appalled at the seemingly disorganized and informal management of the time, and equally distressed by the costly throughput of poorly motivated workers laboring at inefficient processes, Taylor set about to invent "scientific management", a revolutionary movement that proposed the reduction of waste through the careful study of work.

Taylor came up with the original 'time-and-motion' studies, perhaps one of the first attacks on non-value work. Peter Drucker, a management guru par excellence who coined the term 'knowledge worker', ranked Taylor, along with Darwin and Freud, as one of the seminal thinkers of modern times. ["Frederick Taylor, Early Century Management Consultant", The Wall Street Journal Bookshelf, June 13, 1997, p. A1]

The essence of Taylorism is the antithesis of agile principles, but it is nonetheless instructive. Counter to what we know today, Taylor believed that workers are not capable of understanding the underlying principles and science of their work; they need to be instructed step by step in what to do and how to do it; and nothing is left to chance or decision. Rigid enforcement is required.

However, Taylor was close to the mark with his doctrine about value-adding work. According to Taylor, managers must accept that they have a responsibility to design efficient and effective processes and procedures. Waste must be eliminated! Every action requires a definition and a means to measure results.

Taylor was not well liked by workers, and it's not hard to see why. But Taylor's ideas and practices brought great efficiencies and profitability while providing customers with products of predictable quality. Taylor's most important legacy is perhaps his idea of scientific management and the importance of process definition and process management as a means to control product and productivity.

I like what Steve McConnell says about the relationship between quality and software. Building off Taylor's idea of 'do it once, right' -- though he does not mention Mr. Taylor -- McConnell, author of the respected book "Code Complete", states that the "general principle of software quality is ... that improving quality reduces development costs ... the best way to improve productivity is to reduce the time reworking..."

Kent Beck, writing in his book "Extreme Programming Explained, Second Edition", has a pretty strong opinion about the legacy of Taylorism and its lingering effects on the knowledge industry. He says that Taylor brought a social structure we continue to apply unconsciously, and he warns against the messages that Taylorism implies: workers are interchangeable; workers work only hard enough not to be noticed; quality is an external responsibility.
A project management tip
Frederick Taylor was the first to study and quantify non-value work and to put emphasis on eliminating wasteful and time-consuming processes, procedures, and environmental impediments.




The Fiduciary and the PM



Consider this explanation of a fiduciary:
In a fiduciary relationship, one person, in a position of vulnerability, justifiably vests confidence, good faith, reliance, and trust in another whose aid, advice or protection is sought in some matter.

In such a relation, good conscience requires the fiduciary to act at all times for the sole benefit and interest of the one who trusts.
So, what are we to make of that?
Certainly, the project manager is, or should be, vested with confidence, good faith, reliance, and trust. So, that makes the PM a fiduciary watching out for the vulnerable.

And, in a project situation, who is vulnerable?
  • The client or customer?
  • The sponsor?
  • Other project staff?
And the PM is to hold all their interests in hand and find the best solution, the one that optimizes the interests of each of them? Good luck with that!

At some point, some ox is going to get gored. And then who blames the fiduciary? And to what risk is the fiduciary held?

The answer is: it's different in every project, depending on whether the client or the sponsor is supreme. And, of course, how does the PM get measured?
  • Business satisfaction re the project scorecard
  • Client satisfaction re business relationship
I think this is why they pay the PM the big bucks!





Thursday, December 27, 2018

Le Carré and project management



As a former intelligence professional, John le Carré is one of my favorite authors, to say nothing of the dry British wit and sparkling prose that support some quite challenging plots. Nonetheless, I didn't expect to find this wisdom in the pages of "Our Kind of Traitor":
In operational planning there are two opportunities only for flexibility: One, when you've drawn up your plan. Two, when the plan goes belly up. Until it does, stick like glue to what you've decided, or you're ....



Monday, December 24, 2018

Agile and the V-and-V thing



Have you thought much about this? Two of the conceptual conundrums of the hybrid methodology project are:
  1. How do you verify that which is incomplete and
  2. How do you validate the efficacy of that which is yet to be conceived?
Verification and validation (V-and-V) are traditionally held to be very important project practices that are difficult to map directly into the Agile domain. Traditionally, V-and-V comprises these practices:
  • Validation: Each requirement is validated for its business usefulness, in effect its efficacy toward project objectives. Validation usually occurs no later than the last step in gathering and organizing requirements.
  • Verification: When development is complete, and when integration of all requirements is complete, the roll is called to ensure that every validated requirement is present and accounted for.
Validation
Placed into an Agile context, validation is applied both to the project backlog and to the iteration backlog, since changes are anticipated to occur.

Validation is typically first applied at the story or use case level, validating with conversation among the interested and sponsoring parties that the functionality proposed is valid for the purpose.
One can imagine validating against external rules and regulations, perhaps internal standards, and of course validating against the business case.

Verification
Verification is generally a practice at the iteration level, verifying that the iteration backlog matches the iteration outcomes, and logging any differences.
Depending on the project paradigm, V-and-V can be carried into integration tests and customer acceptance tests, again testing against various benchmarks and standards for validity, and verifying that everything delivered at the iteration level got integrated at the deliverable product level.
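
To make that bookkeeping concrete, here is a minimal sketch in code of validation and verification as two distinct checks. The names and structure are hypothetical, invented for illustration rather than taken from any particular tool or method:

```python
from dataclasses import dataclass

@dataclass
class Story:
    """One backlog item (story or use case) -- a hypothetical structure."""
    id: str
    description: str
    validated: bool = False  # set by validation against the business case

def validate(story: Story, rules) -> Story:
    """Validation: check the story against business rules, regulations,
    and internal standards before it enters the iteration backlog."""
    story.validated = all(rule(story) for rule in rules)
    return story

def verify(planned_ids: set, delivered_ids: set) -> set:
    """Verification: call the roll at iteration's end; the difference is
    the log of validated items that did not make it into the outcomes."""
    return planned_ids - delivered_ids
```

The point of the sketch is the separation of concerns: validation is a judgment applied per item against external benchmarks, while verification is a mechanical roll call of planned versus delivered.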



Friday, December 21, 2018

Meeting the customer's standards



"They" say about Agile:
  • You don't have to bother with gathering requirements; requirements just emerge
  • You don't have to have any documentation; it's all in the code
  • You can do away with V&V (verification and validation), because that's like QA tacked onto the end
  • You don't really have to have an architect, because (somehow) the best architecture emerges
Taking responsibility for business-critical performance
In my view, and what I tell my students: Nonsense, all of it! "They" have never tried to build something with OPM (other people's money) and been personally accountable for how the money is spent, what value is produced, and how the value/cost ratio was managed to the advantage of the business.  But even more important, "They" have never had to be responsible for business-critical performance.

Regulators -- helpful?
But to that add external regulators. Regulators don't give a flip about what "They" think. There had better be outcomes that can be audited back to the base level; there had better be documentation that supports claims; there had better be a way to do V&V before the "what did you know when and why didn't you know sooner" questions arrive via your local lawsuit.

In any regulated product market, like medical devices for instance that are built with a lot of software, the focus has to be on the joint satisfaction of the buyer/user and the regulator. Fortunately, both of these groups are on the "output" side of the project, which fits Agile quite well.

Where agile has a vulnerability is in the compliance part... unless compliance is built into the backlog, either as a framework or as explicit "stories". Not to build it in is to take a really unrealistic path to merely temporary success... temporary, that is, until the regulators tear it apart.

Same comments apply for any number of regulated businesses, like banking, by the way, and back office areas like cash management and receivables where these things have to sustain audits, to say nothing of safety systems like certain critical avionics, ship controls, and industrial controls.

Oh, big data!
And, in this day and time: "big data". Ever tried to validate a data warehouse with tens of millions of records? The issue is simple; the solution is not. Reporting from a data warehouse is almost like "lying with statistics": you can find some data that fits almost any scenario, but is the context accurate? The marriage of data with context is where the complexity (and the information) lies. Doing data reports in Agile could be a fool's errand if the "stories" are not carefully crafted.
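
As a deliberately simplified illustration of what a carefully crafted validation story might demand, here is a sketch of tie-out checks against a hypothetical receivables table; the table and column names are invented for the example:

```python
import sqlite3

def reconcile(conn: sqlite3.Connection) -> dict:
    """Run tie-out checks against a (hypothetical) receivables table.
    Row counts alone aren't enough; totals supply the context."""
    checks = {
        # Record count must tie back to the audited source of record.
        "row_count": "SELECT COUNT(*) FROM receivables",
        # Context check: the open balance must match, not just the count.
        "open_balance": "SELECT SUM(balance) FROM receivables WHERE status = 'open'",
    }
    return {name: conn.execute(sql).fetchone()[0] for name, sql in checks.items()}
```

Each figure would be compared to the audited source-of-record value, and any gap treated as a defect in the report, the load, or the context.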

Security intrusion avoidance, anyone?





Tuesday, December 18, 2018

Faces of risk



1
When you say "risk management" to most PMs, what jumps to mind is the quite orthodox conception of risk as the duality of an uncertain future event's impact and the probability of that event happening.

Around these two ideas -- impact and frequency -- we've discussed the conventional management approaches in this blog and elsewhere. This conception is commonly called the "frequentist" view of risk, depending as it does on the frequency of occurrence of a risk event. It is the conception presented in Chapter 11 of the PMBOK.
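
In practice, the frequentist calculus reduces to expected-value arithmetic over the risk register. A minimal sketch, with register entries invented purely for illustration:

```python
# Frequentist risk arithmetic: expected exposure = probability * impact.
# The register entries below are invented for illustration only.

risk_register = [
    {"event": "key vendor slips", "probability": 0.30, "impact_usd": 100_000},
    {"event": "scope growth",     "probability": 0.50, "impact_usd": 40_000},
]

exposure = sum(r["probability"] * r["impact_usd"] for r in risk_register)
print(f"Total expected exposure: ${exposure:,.0f}")  # -> $50,000
```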

The big criticism of the frequentist approach -- particularly in project management -- is that too often there is no quantitative backup or calibration for the probabilities -- and sometimes not for the impact either. This means the PM is just guessing. Sponsors push back, and the risk register's credibility is put asunder. If you're going to guess at probabilities, skip down to Bayes!

However... (there's always a "however", it seems), there are three other conceptions of risk that are not frequentist in their foundation. Here are a few thoughts on each:

2
Failure Mode and Effects Analysis (FMEA): Common in many large-scale and complex system projects, and used widely at NASA and in the US DoD. FMEA focuses on how things fail, and it seeks to thwart such failures, thus designing risk out of the environment. Failures are selected for their impact with essentially no regard for frequency, because most of the important failures occur so infrequently that statistics are meaningless. Example: run-flat tires. Another example: WMD countermeasures.
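
A toy sketch of that impact-only ranking, with failure modes invented for illustration; note there is deliberately no probability column:

```python
# FMEA-style ranking: severity drives the list; frequency is ignored.
# Entries are invented for illustration only.

failure_modes = [
    {"mode": "tire blowout at speed", "severity": 10, "mitigation": "run-flat tires"},
    {"mode": "nav sensor dropout",    "severity": 7,  "mitigation": "redundant sensors"},
]

for fm in sorted(failure_modes, key=lambda f: f["severity"], reverse=True):
    print(f"severity {fm['severity']}: {fm['mode']} -> design out via {fm['mitigation']}")
```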

3
Bayes/Bayes' theorem/Bayesians: Bayesians define risk as the gap between a present (or, more properly, 'a priori') estimate of an event and an observed outcome/value of the actual event (more properly called the posterior value).

There is no hint of the frequentist in Bayes; it's simply about gaps -- what we think we know versus what it turns out we should have known. The big criticism -- by frequentists -- is about the 'a priori' estimate: it's often a guess, a 50/50 estimate to get things rolling.

Bayes analysis can be quite powerful. It was first conceived in the 18th century by an English mathematician and preacher named Thomas Bayes, but in WWII it came into its own: it became the basis for much of the theory behind antisubmarine warfare.

But it can be a flop also: our 'a priori' may be so far off base that there is never a reasonable convergence of the gap, no matter how long we observe or how many observations we take.
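
For the flavor of it, here is a minimal Bayes' theorem update; the scenario and numbers are invented for illustration. Start from a 50/50 prior and let one observation move the estimate -- the moving gap between prior and posterior is the Bayesian's "risk":

```python
# One Bayes update: P(slip | late report), from an invented scenario.
prior = 0.5                 # a priori guess: "will the milestone slip?"
p_late_given_slip = 0.8     # P(late status report | slip)
p_late_given_no_slip = 0.3  # P(late status report | no slip)

# Observe one late status report and apply Bayes' theorem.
evidence = prior * p_late_given_slip + (1 - prior) * p_late_given_no_slip
posterior = prior * p_late_given_slip / evidence
print(round(posterior, 2))  # 0.73 -- the estimate has moved off 50/50
```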

4
Insufficient controllability, aka autonomous operations: the degree to which we have command of events. Software particularly, and all autonomous systems generally, are considered a "risk" because we lack absolute control. See also: control-freak managers. See also the movie 2001: A Space Odyssey. Again, no conception of frequency.




Saturday, December 15, 2018

To self-organize, or not?




I like headlines that have a simple message

This one--No more self-organizing teams--caught my eye.
Now, to be fair, Mike Cohn more or less supports the thesis presented here when he quotes Philip Anderson, writing in "The Biology of Business":

Self-organization does not mean that workers instead of managers engineer an organization design. It does not mean letting people do whatever they want to do. It means that management commits to guiding the evolution of behaviors that emerge from the interaction of independent agents instead of specifying in advance what effective behavior is. (1999, 120)

But, back to the headline: what did Mr. Highsmith tell us? (Of course, he said more than these bullets, but these are the highlights.)
  • There is just too much experience and management literature that shows that good leaders make a big difference
  • There is a contingent within the agile community that is fundamentally anarchist at heart and it has latched onto the term self-organizing because it sounds better than anarchy. However, putting a duck suit on a chicken doesn’t make a chicken a duck.
  • Delegating decisions in an organization isn’t a simple task; it requires tremendous thought and some experimentation
  • Leading is hard. If it was easy, every company would be "great," to use Jim Collins' term (Good to Great).

What did he not tell us?
  • Dominance is a human trait not easily set aside; thus the natural leaders will come to the fore and the natural followers will fall in, thankfully. There's no need, and no practical way, to rotate the leadership once dominance is established
  • Like it or not, positional authority counts for something in all but the smallest enterprises. Thus, senior managers are senior for a reason. It's hard to establish credibility with the stakeholders who hold the key to resources if the team is being led from the bottom of the pecking order.
  • Self-organization may mask biases and bully the nemesis off the team. Groupthink, anyone?
  • Delegation is a tricky matter: do only those things that only you can do
And the answer is, according to Highsmith, something called "light touch"; in reality it means leading and managing from a position of trusting the team, while mentoring the "self-organization" towards a better day.




Wednesday, December 12, 2018

Behavior v Outcome



You have to give Jurgen Appelo high marks for imaginative illustrations that catch the eye and convey the thought. He says this is one of his best illustrations ever; he may be right. He calls it his "celebration grid". I imagine Jurgen will be telling us a lot more about this if it catches on.





But what he says about these grid points is the more important takeaway:

  • “Celebrate failure” is nonsense, because you shouldn’t celebrate failure that comes from mistakes (the red part). What you should celebrate is learning, and repeating good practices (the green parts).
  • Pay-for-performance tends to drive people away from experiments, toward the safe practices on the right, with little learning as a result. 
  • Hierarchies are good at exploiting opportunities, and endlessly repeating the same practices; but they learn very little. 
  • Networks are good at exploring new opportunities, and failing half of the time, but they’re not good at efficiently repeating practices.
File this under leadership for learning and motivation, change management, managing team dynamics, and carrot and stick.



Sunday, December 9, 2018

Failure or faults?



Does software fail, or does it just have faults, or neither?
Silly questions? Not really. I've heard them for years.

Here's the argument for "software doesn't fail":
Software always works the way it is designed to work, even if designed incorrectly. It doesn't wear out or break (unless you count corrupted files); it never performs other than exactly as designed. To wit: it never fails.

Here's the argument for "it never fails, but it has faults":
Faults refer to functionality or performance incorrectly specified, such that the software is not "fit for use". Thus, in the quality sense of "fit for use", it has faults.

I don't see an argument for "neither", but perhaps there is one.

However, Peter Ladkin is not buying any of this. In his blog, "The Abnormal Distribution", he has an essay, a small part of which is here:

What’s odder about the views of my correspondent is that, while believing “software cannot fail”, he claims software can have faults.

To those of us used to the standard engineering conception of a fault as the cause of a failure, this seems completely uninterpretable: if software can’t fail, then ipso facto it can’t have faults.
Furthermore, if you think software can be faulty, but that it can’t fail, then when you want to talk about software reliability, that is, the ability of software to execute conformant to its intended purpose, you somehow have to connect “fault” with that notion of reliability.

And that can’t be done.

