Friday, October 31, 2014

Robots for PM


Should we laugh?
 

Robot to Dilbert: "I have come to micromanage you.

But only until I replace you with a robot and turn you into furniture"

Dilbert to Boss: "On the plus side, he has a plan and communicates well"
 
Dilbert by Scott Adams


Read in the library at Square Peg Consulting about these books I've written
Buy them at any online book retailer!
http://www.sqpegconsulting.com
Read my contribution to the Flashblog

Wednesday, October 29, 2014

Requirements entropy framework


This one may not be for everyone. From the Winter 2014 issue of "Systems Engineering" we find a paper about "... a requirements entropy framework (REF) for measuring requirements trends and estimating engineering effort during system development."

Recall from your study of thermodynamics that the 2nd Law is about entropy, a measure of disorder. And Lord knows, projects have entropy, to say nothing of requirements!

The main idea behind entropy is that there is disorder in all systems, natural and otherwise, and there is a degree of residual disorder that can't be zeroed out. Thus, all capacity can never be used; a trend can never be perfect; an outcome will always have a bit of noise -- the challenge is to get as close to 100% as possible. The information version of this insight is credited to Bell Labs scientist Claude Shannon.

Now in the project business, we've been in the disorder business a long time. Testers are constantly looking at the residual disorder in systems: velocity of trouble reports; degrees of severity; etc.

And, requirements people the same way: velocity and nature of changes to backlog, etc.

One always hopes the trend line is favorable and the system entropy is going down.

So, back to the requirements framework. Our system engineering brethren are out to put a formal trend line to the messiness of stabilizing requirements.

Here's the abstract to the paper for those who are interested. The complete paper is behind a paywall:
ABSTRACT
This paper introduces a requirements entropy framework (REF) for measuring requirements trends and estimating engineering effort during system development.

The REF treats the requirements engineering process as an open system in which the total number of requirements R transition from initial states of high requirements entropy HR, disorder, and uncertainty toward the desired end state of low requirements entropy as R increase in quality.

The cumulative requirements quality Q reflects the meaning of the requirements information in the context of the SE problem.

The distribution of R among N discrete quality levels is determined by the number of quality attributes accumulated by R at any given time in the process. The number of possibilities P reflects the uncertainty of the requirements information relative to the desired end state. The HR is measured or estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process.

The requirements information I increases as HR and uncertainty decrease, and ΔI is the additional information necessary to achieve the desired state from the perspective of the receiver. The HR may increase, decrease or remain steady depending on the degree to which additions, deletions and revisions impact the distribution of R among the quality levels.

Current requirements volatility metrics generally treat additions, deletions and revisions the same and simply measure the quantity of these changes over time. The REF measures the quantity of requirements changes over time, distinguishes between their positive and negative effects in terms of HR and ΔI, and forecasts when a specified desired state of requirements quality will be reached, enabling more accurate assessment of the status and progress of the engineering effort.

Results from random variable simulations suggest the REF is an improved leading indicator of requirements trends that can be readily combined with current methods. The additional engineering effort ΔE needed to transition R from their current state to the desired state can also be estimated. Simulation results are compared with measured engineering effort data for Department of Defense programs, and the results suggest the REF is a promising new method for estimating engineering effort for a wide range of system development programs.
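
For those who'd rather see arithmetic than prose, here's a minimal sketch of the Shannon-entropy idea applied to a requirements backlog -- my illustration, not the paper's REF formula (that one is behind the paywall). Assume the requirements are spread across a handful of quality levels; the entropy of that distribution trends toward zero as requirements accumulate the desired quality attributes:

    import math

    def requirements_entropy(counts):
        # counts[i] = number of requirements currently sitting at quality level i
        total = sum(counts)
        h = 0.0
        for c in counts:
            if c > 0:
                p = c / total
                h -= p * math.log2(p)   # Shannon entropy, in bits
        return h

    # Early days: 100 requirements scattered across 5 quality levels -- high disorder
    print(requirements_entropy([40, 25, 20, 10, 5]))    # ~2.0 bits

    # Later: most requirements have accumulated the desired attributes -- entropy falling
    print(requirements_entropy([2, 3, 5, 10, 80]))      # ~1.1 bits

    # Desired end state: everything at the top level -- no residual uncertainty
    print(requirements_entropy([0, 0, 0, 0, 100]))      # 0.0 bits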



Monday, October 27, 2014

The architecture thing...


So, we occasionally get silliness from serious people:

" ......  many enterprise architects spend a great deal of their time creating blueprints and plans based on frameworks.  The problem is,  this activity rarely leads to anything of genuine business value because blueprints or plans are necessarily:

Incomplete – they are high-level abstractions that omit messy, ground-level details. Put another way, they are maps that should not be confused with the territory.

Static – they are based on snapshots of an organization and its IT infrastructure at a point in time.

Even roadmaps that are intended to describe how organisations’ processes and technologies will evolve in time are, in reality, extrapolations from information available at a point in time.
The straightforward way to address this is to shun big architectures upfront and embrace an iterative and incremental approach instead"


That passage is dubious, as any real architect knows, though not all wrong, to be sure:
  • Yes, frameworks are often more distraction than value-add; personally, I don't go for them
  • Yes, if your blueprints are pointing to something of no business value, then change them or start over... simple common sense ... but let it be said: you can describe business value on a blueprint!
  • No, high level abstractions actually are often quite useful, starting with the narrative or epic story or vision, all of which are forms of architecture, all of which are useful and informative. It's called getting a view of the forest before examining the trees.
  • Yes, abstractions hide detail, but so what? The white box can be added later
  • Yes, roadmaps obsolesce. Yes, they have to be kept up to date; yes, sometimes you start on the road to nowhere. So what? If it doesn't work, change it. 
I think the influence here is the agile principle that "... the best architecture emerges ...", which is silliness writ large. We should put that aside, permanently. Why? Because many small scale architectures simply don't scale.

Take, as just one example, a physical story board or a Kanban board of sticky notes in a room somewhere. That architecture works well for a half dozen people. Now, try to scale that for half a hundred... it really doesn't scale. The technology doesn't scale: you need an electronic database to support 50 people; and the idea of all-independent stories doesn't scale unless you add structure, communications, protocols, etc., all of which are missing or unneeded at small scale.

To the rescue: Here's another recent passage from another serious person who has a better grip:

One conjecture we arrived at is that architects  typically work on three distinct but interdependent structures:

  1. The Architecture (A) of the system under design, development, or refinement, what we have called the traditional system or software architecture.
  2. The Structure (S) of the organization: teams, partners, subcontractors, and others.
  3. The Production infrastructure (P) used to develop and deploy the system, especially important in contexts where the development and operations are combined and the system is deployed more or less continuously.


Friday, October 24, 2014

Security vs Liberty


In my change management classes we debate and discuss this issue:
How do you work with a team that has a low tolerance for high change or uncertainty?

Of course, you can imagine the answers that come back:
  • Frame all actions with process and plans
  • Provide detailed instructions
  • Rollout change with a lot of lead time, and support it with clearly understandable justification
What's rarely -- almost never -- mentioned is to lead through the uncertainty. Is it odd that PMs think first of plans and process before leadership comes to mind? My advice:
  • Those in a high-avoidance culture have certain expectations of their leaders, starting with the establishment and maintenance of order, safety, and fairness.
  • Insofar as change is required, even radical change can be accepted and tolerated so long as it comes with firm and confident leadership.
  • Lots of problems can be tolerated so long as there is transparency, low corruption, and a sense of fair play. In other words, the little guy gets a fair shake.

Now comes a provocative op-ed from a German journalist based in Berlin with essentially this message (disclosure: I lived and worked in Berlin so I have an unabated curiosity about my former home base):
"To create and grow an enterprise like Amazon or Uber takes a certain libertarian cowboy mind-set that ignores obstacles and rules.

Silicon Valley fears neither fines nor political reprimand. It invests millions in lobbying in Brussels and Berlin, but since it finds the democratic political process too slow, it keeps following its own rules in the meantime. .....

It is this anarchical spirit that makes Germans so neurotic [about American technology impacts on society]. On one hand, we’d love to be more like that: more daring, more aggressive. On the other hand, the force of anarchy makes Germans (and many other Europeans) shudder, and rightfully so. It’s a challenge to our deeply ingrained faith in the state.
Certainly, the German view of American business practices is the antithesis of following the central plan. To me, this is not all that unfamiliar, since resistance to central planning and state oversight, and admiration for the "cowboy spirit" of individualism, are culturally mainstream in the U.S., less so in the social democracies.

In the U.S. if you ask what is the primary purpose of "the State" -- meaning: the central authority -- whether it's Washington or the PMO -- the answer will invariably be "protection of liberty" (See: the Liberty bell; "give me Liberty or give me death" motto; the inalienable right to pursue Liberty, et al)

Ask the same question of a social democrat and you get "protection and security". With two devastating world wars in the space of 30 years that wiped out most of every economy except the U.S.'s and imposed almost unspeakable population losses, again except in the U.S., how could it be otherwise?

Security vs Liberty: a fundamental difference in the role of central authority. Of course, all central authority provides both, and the balance shifts with circumstances. In the U.S., during the mid-19th century Civil War, during WW II, and then after 9/11, security came to the front and liberty took a back seat.

Now, port all this philosophy to a project context, and in the software world, no surprise: Agile!

Certainly the most libertarian of all methodologies. And, agile comes with a sustained challenge to the traditional, top-down, centrally planned, monitored, and controlled methodologies that grew out of WWII.

And, agile even challenges the defined process control methods that grew out of the post WW II drive for sustainable, repeatable quality.

Did someone say: high change or uncertainty?



Monday, October 20, 2014

The statistics of failure



The Ebola crisis raises the issue of the statistics of failure. Suppose your project is to design the protocols for safe treatment of patients by health workers, or to design the haz-mat suits they wear -- what failure rate would you accept, and what would be your assumptions?

In my latest book, "Managing Project Value" (the green cover photo below), in Chapter 5: Judgment and Decision-making as Value Drivers, I take up the conjunctive and disjunctive risks in complex systems. Here's how I define these $10 words for project management:
  • Conjunctive: equivalent to AND. The risk that not everything will work
  • Disjunctive: equivalent to OR. The risk that at least one thing will fail
Here's how to think about this:
  • Conjunctive: the chance that everything will work is way lower than the chance that any one thing will work.
    Example: 25 things have to work for success; each has a 99.9% chance of working (1 failure per thousand). The chance that all 25 will work simultaneously (assuming they all operate independently): 0.999^25, or about 0.975 (25 failures per thousand)
  • Disjunctive: the chance that at least one thing will fail is way more than the chance that any one thing will fail.
    Example: 25 things have to work for success; each has 1 chance in a thousand of failing, 0.001. The chance that there will be at least one failure among all 25 is about 0.025, or 25 chances in a thousand.*
So, whether you come at it conjunctively or disjunctively, you get about the same answer: complex systems are way more vulnerable than any one of their parts. So... to get really good system reliability, you have to be nearly perfect with every component.
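
Here's a minimal sketch of that arithmetic, using the same numbers as the example above (25 independent components, each 99.9% reliable):

    # Conjunctive vs. disjunctive risk for n independent components
    p_work = 0.999                    # each component: 1 failure per thousand
    n = 25

    p_all_work = p_work ** n          # conjunctive: everything works
    p_any_failure = 1 - p_all_work    # disjunctive: at least one thing fails

    print(round(p_all_work, 3))       # ~0.975
    print(round(p_any_failure, 3))    # ~0.025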

Introduce the human factor

So, now we come to the juncture of humans and systems. Suffice to say humans don't work to a 3-9's reliability. Thus, we need security in depth: if an operator blows through one safeguard, there's another one to catch it.

John Villasenor has a very thoughtful post (and, no math!) on this very point: "Statistics Lessons: Why blaming health care workers who get Ebola is wrong". His point: hey, it isn't all going to work all the time! Didn't we know that? We should, of course.

Dr Villasenor writes:
... blaming health workers who contract Ebola sidesteps the statistical elephant in the room: The protocol ... appears not to recognize the probabilities involved as the number of contacts between health workers and Ebola patients continues to grow.

This is because if you do something once that has a very low probability of a very negative consequence, your risks of harm are low. But if you repeat that activity many times, the laws of probability ... will eventually catch up with you.

And, Villasenor writes in another related posting about what lessons we can learn about critical infrastructure security. He posits:
  • We're way out of balance on how much information we collect and who can possibly use it effectively; indeed, the information overload may damage decision making
  • Moving directly to blame the human element often takes latent system issues off the table
  • Infrastructure vulnerabilities arise from accidents as well as premeditated threats
  • The human element is vitally important to making complex systems work properly
  • Complex systems can fail when the assumptions of users and designers are mismatched
That last one screams for the embedded user during development.


*For those interested in the details, this issue is governed by the binomial distribution, which tells us how to select or evaluate one or more events among many. You can do a binomial calculation on a spreadsheet with the binomial formula relatively easily.
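
As a sketch of that spreadsheet exercise in code -- assuming you have scipy handy; any spreadsheet binomial function gives the same numbers -- the example from above looks like this:

    from scipy.stats import binom

    n, p_fail = 25, 0.001              # 25 components, each 1-in-1000 to fail

    print(binom.pmf(0, n, p_fail))     # P(exactly zero failures) ~ 0.975
    print(binom.sf(0, n, p_fail))      # P(one or more failures)  ~ 0.025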



Friday, October 17, 2014

What's success in the PM biz?


Here's an Infographic with a Standish-like message:
  • The majority votes for inputs (cost, schedule)
  • Output, where the only business-usable value is, gets the shorter straw
Why does the voting go this way? Probably a result of PM incentives and measurements: success only comes from controlling inputs. Is that the right definition of success? No, heck no! Strong message follows.

As I write in my book (cover below) about Maximizing Project Value, there's cost, schedule, and there's value. They are not the same.
  • The former is given by the business to the project; the latter is generated by the business applying the project's outcomes.
  • Cost is always monetized; the value may or may not be.
  • Schedule is often a surrogate for cost, but not always; sometimes, there is business value with schedule (first to market, etc) and sometimes not. Thus, paying attention to schedule is usually a better bet than fixing on cost.
  • Value may be "mission accomplished" in the public sector; indeed, cost may not really matter: Mission at any price!
"Let [everyone] know ... that we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe, in order to assure the survival and the success of liberty." JFK, January, 1961

In the private sector, it may be mission, but often it's something more tangible: operating efficiency, product or service, or R&D. What's the success value on R&D... pretty indirect much of the time. See: IBM and Microsoft internal R&D

Monday, October 13, 2014

Ask a SME... or Ask a SME?


It seems like the PM blogosphere talks constantly of estimates. Just check out #noestimates in Twitter-land. You won't find much of substance among thousands of tweets (I refrain from saying twits).

Some say estimates are not for developers: Nonsense! If you ask a SME for an estimate, you've done the wrong thing. But, if you ask a SME for a range of possibilities, immediately you've got focus on an issue... any single point estimate may be way off -- or not -- but focusing on the possibilities may bring all sorts of things to the surface: constraints, politics, biases, and perhaps an idea to deep-six the object and start over with a better idea.

How will you know if you don't ask?

Some say estimates are only for the managers with money: Nonsense re the word "only". I've already put SMEs in the frame. The money guys need some kind of estimate for their narrative. They aren't going to just throw money in the wind and hope (which isn't a plan, we all have been told) for something to come out. Estimates help frame objectives, cost, and value.

To estimate a range, three points are needed. Here's my three point estimate paradox:
We all know we must estimate with three points (numbers)... so we do it, reluctantly
None of us actually want to work with (do arithmetic with) or work to (be accountable to) the three points we estimate

In a word, three point estimates suck -- not convenient, thus often put aside even if estimated -- and most of all: who among us can do arithmetic with three point estimates?

One nice thing about 3-points and ranges, et al, is that when applied to a simulation, like the venerable Monte Carlo, a lot washes out. A few big tails here and there are of no real consequence to the point of the simulation, which is to find the central value of the project. Even if you're looking for a worst case, a few big tails don't drive a lot.
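
To see how much washes out, here's a minimal Monte Carlo sketch. The tasks and their three-point (optimistic, most likely, pessimistic) estimates are invented for illustration; the point is that the total settles around a central value even though one task has a long pessimistic tail:

    import random

    # Hypothetical tasks: (optimistic, most likely, pessimistic) durations in days
    tasks = [
        (5, 8, 20),     # note the long pessimistic tail
        (3, 4, 6),
        (10, 12, 15),
        (2, 3, 10),
    ]

    def one_run():
        # random.triangular(low, high, mode) samples a triangular distribution
        return sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks)

    trials = 10_000
    totals = sorted(one_run() for _ in range(trials))

    mean = sum(totals) / trials
    p80 = totals[int(0.8 * trials)]     # 80th percentile, a common planning point

    print(f"expected total ~ {mean:.1f} days; 80th percentile ~ {p80:.1f} days")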

But, here's another paradox:
We all want accurate estimates backed up by data
But data -- good or bad -- may not be the driver for accurate estimates

Does this paradox let SMEs off the hook? After all, if not data, then what? And, from whom/where/when?

Bent Flyvbjerg tells us -- with appropriate reference to Tversky and Kahneman -- we need a reference class because without it we are subject to cognitive and political maladies:
Psychological and political explanations better account for inaccurate forecasts.
Psychological explanations account for inaccuracy in terms of optimism bias; that is, a cognitive predisposition found with most people to judge future events in a more positive light than is warranted by actual experience.
Political explanations, on the other hand, explain inaccuracy in terms of strategic misrepresentation.

So that's it! A conspiracy of bad cognition and politics is what is wrong with estimates. Well, just that alone is probably nonsense as well.

Folks: common sense tells us estimates are just that: not facts, but information that may become facts at some future date. Estimates are some parts data, some parts politics, some parts subjective instinct, and some parts unknown. But in the end, estimates have their usefulness and influence with the SMEs and the money guys.

You can't do a project without them!




Friday, October 10, 2014

If I flip a coin, the expected outcome is ...


It's likely that every project manager somewhere along the way has been taught the facts about a flip of a fair coin as an introduction to "statistics for project management"

Thus, we all understand that a fair coin has a 50-50 chance of heads or tails; that the expected value of the outcome -- outcome weighted by frequency -- is 50% heads or 50% tails. Less well understood is that sequences like HHHHHHH or TTTTTT can occur, even in a fair coin toss. Lest we be alarmed, the running fraction of heads eventually returns to 50% ... just stick with it
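
If the long runs seem implausible, a quick simulation -- a sketch, nothing more -- makes the point: flip a fair coin ten thousand times, find the longest streak, and check the overall heads fraction:

    import random

    flips = [random.choice("HT") for _ in range(10_000)]

    # Longest run of identical results
    longest, run = 1, 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)

    print("longest streak:", longest)                        # typically a dozen or more
    print("heads fraction:", flips.count("H") / len(flips))  # ~0.50 all the same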

Even less understood is that what I just wrote is largely inapplicable to project management. Not because we don't flip a lot of coins most days, but because the coin toss explanation is all about "memoryless" systems with protocols (rules) that are invariant to management intervention.

Shocking as it may seem, the coin simply does not remember the last toss. So, the rules of chance, even after HHHHHHH or TTTTTT only tell us that the next flip is 50-50 chance of heads or tails. But, of course, if this sequence were some project outcome, we'd be all over it! No HHHHHHH or TTTTTT.

In our world, for starters: we remember! And, we get in and mix it up, to wit: we intervene! No coins for us, by God!

Consequently: the rules of chance for memoryless events are pretty much inapplicable in project management.

So, does this make all statistical concepts inapplicable, or is there something to be known and appreciated, better yet: applied to project activity?

Of course, you know the answer: Of course there are valuable and applicable statistical concepts. Let's take this list for a "101" course in "I hate statistics for Project Managers"
  • Central tendency: random stuff tends to gather about a central value. This gives rise to the ideas of average, expected value, grading on the curve, the bell curve, and the all-important "regression to the mean". The latter is useful when assessing your team's performance: an exceptional performance, above or below average, is likely to be followed by one closer to the mean.
  • Samples can be just as valid as having all the information. So, if you can't afford to test everything, measure everything, gather everything in a pile, etc, just take a sample... the results are more affordable and can be just as valid
  • All you need for a simulation is some three point estimates. Another benefit of central tendency is that the Monte Carlo simulation is quite valid even if you know nothing at all about how outcomes are distributed, just so long as you can get a handle on some three point estimates. And, even the two points on the tails need not be too worrisome... a lot washes out in the simulation results, all a gift of central tendency.
  • Ask me now, ask me later: Whatever estimates you come with now, they will change as time passes... risk estimates are not generally "stationary" in time. And, usually, the estimates migrate from optimistic to pessimistic. So, it only gets worse! (Keep your options dry)
  • Expected value is outcome weighted by frequency. It's just a form of average, with frequency taken into account.
  • Prospect theory tells us we overweight losses and underweight equivalent gains. And, even more subjectively, we all have different ideas about the weighting depending on how much we already have in the game. Where you stand depends on where you sit! Take note: this is pretty damn reliable; you can take it to the bank.
If you're tagged with putting together a risk register, put the last three on a sticky note and stare at it constantly.





Wednesday, October 8, 2014

There are no facts about the future


A favorite quote:
There are no facts about the future...
Dr David Hulett
PMBOK Chapter 11 Leader

And, so you might ask: What are facts about?
And, of course, I would answer: facts are what we can observe, measure, sense, and conclude about the present, and the same could be said about the past.

And that leaves the future fact free ... where, by the way, a good deal of project activity will happen.
OMG! And, there are no facts out there!

Which brings me to a neat list of maladies (aka, uncertainties) -- all of which can apply to the future -- said list put together by Glen Alleman recently (Glen is prone to big words that drive me to my dictionary, so I paraphrase):
  • Statistical uncertainty - repeatable random events or outcomes, the range of which is best handled by buffers or margin in the spec, or some other way to immunize the project against outliers.
  • Subjective judgment - bias in your thinking: anchoring yourself to something you know or have been told, and adjusting toward the least difficult, most easily retrieved, or nearest solution; these are all best understood by reading the stuff written by Amos Tversky and Daniel Kahneman
  • Systematic error - unwitting or misunderstood departures or biases -- usually repeated similarly in similar situations -- from an acknowledged expert solution, reference model, or benchmark
  • Incomplete knowledge - You may know what you don't know, or you may not know what you don't know. This is famously attributed to US Defense Secretary Don Rumsfeld. Fortunately, this lack of knowledge can be improved with effort. Sometimes, you have an epiphany; sometimes the answer falls in your lap; sometimes you can systematically learn what you don't know.
  • Temporal variation - or, better yet: not stationary. To be "not stationary" means there is a sensitivity in the unit (system) under test to when and/or where you make an observation or measurement, or there is instability in the observed and measured system itself
  • Inherent stochasticity (irregular, random, or unpredictable) - instability or random effects between and within system elements, some intended, and some not intended or even predicted. If the instability is quite disproportional to the stimulus, we call it a chaotic response.
Looking at this list, the really swell news for the "I hate, hate, hate statistics" crowd is that for most project managers on most projects, statistics play a relatively small role in the overall panoply of uncertainty and risk.

Probability -- that is, frequency -- is a bit more prominent in the PM domain because, when associated with impact, you get (yikes!) a statistic, to wit: expected value.

Well, not really. In projects the probability estimate is subject to all the maladies we just went over -- there are rarely any facts about probabilities -- so what we get is something discovered in the 18th century: expected utility (usefulness) value -- that is, the more or less subjective version of expected value.
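
A small sketch of the distinction, with invented numbers: the frequency-weighted expected value stays put, but the expected utility value moves as the (subjective) weight on the impact moves:

    # One risk: 20% chance of a $100K impact
    probability = 0.20
    impact = 100_000

    expected_value = probability * impact        # $20K, frequency-weighted

    # Illustrative utility weights only -- a risk-averse owner "feels" the loss
    # as worse than its dollar value, and the feeling shifts over time
    weight_now = 1.5
    weight_later = 2.5                           # closer to launch, it feels bigger

    print(expected_value)                        # 20000
    print(probability * impact * weight_now)     # 30000 -- expected utility value now
    print(probability * impact * weight_later)   # 50000 -- ask me later...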

And, here's more news you can use: expected utility value is not stationary! Ask me now, I'll give you one estimate; ask me much later, you'll get another estimate. Why? Because my underlying risk attitude (perception) is not stationary...

It's time to end this!....





Monday, October 6, 2014

Fixed price contracts for agile


In my book, Project Management the Agile Way, I make this statement in the chapter on contracts:

Firm Fixed Price (FFP) completion contracts are inappropriate for contracting agile.

I got an email from a reader challenging that assertion, to which I responded:

I always start by asserting that agile is a "fixed price" methodology. But, there is a big difference between a contract for your best effort at a fixed price, and a fixed price contract for working product, to wit: completion (agile manifesto objective)

There's no problem whatsoever in conveying best effort at fixed price through a contract mechanism; it's quite another thing to convey fixed price for a working product.

Agile is a methodology that honors the  plan-driven case for strategic intent and business value; but agile is also a methodology that is tactically changeable -- thus emergent in character -- re interpretation of the plan.


Using the definition that strategic intent is the intended discriminating difference to be attained in the "far" future that has business value, a project can be chartered to develop the product drivers for that discriminating difference. Thus, think of agile as iterative tactically, and plan-driven strategically. The sponsor has control of the strategy; the customer has control of many of the tactics.

Re FFP specifically, I was aiming my arguments primarily at the public sector community, and particularly the US federal acquisition protocols. The public sector -- federal or otherwise -- usually goes to a great deal of trouble to carefully prescribe contract relationships, and the means to monitor and control scope, cost, and schedule.

In particular, in the federal sector, only the "contracting officer" -- who is usually a legal person -- has the authority to accept a change in the written description of scope, a change in the cost obligated, or a change in the delivery schedule and location. The CO ordinarily has one or more official "representatives" (CORs) or technical representatives (COTRs) who are empowered to "interpret" changes, etc.

In the commercial business domain, the concepts of contract protocols are usually much more relaxed, starting with the whole concept of a CO and COR -- many businesses simply don't have a CO at all... just an executive who is empowered to sign a PO or a contract. Thus, there are many flexibilities afforded in the private sector that are not available to the public sector.

When I say "fixed price", in effect I am saying "not cost reimbursable". Cost reimbursable is quite common in science and technology contracts in the public sector, but almost never in the "IT" sector, public or private. So, I find that many IT execs have little understanding why you might write a contract for a contractor to take your money and not pledge completion.


Working from the perspective of "not cost reimbursable",  I make the point about a FFP completion contract as distinct from other forms of fixed price arrangements, like best effort. In my opinion, agile is not an appropriate methodology for a completion contract in the way in which I use the term, to wit: Pass me the money and I will "complete the work" you describe in the contract when we sign the deal.

However, there are FP alternatives for a traditional completion effort, the best of the lot in my opinion being a FP framework within which each iteration is a separate and negotiated fixed-scope, fixed-price job order, and the job order backlog is planned case by case.

However, even in such a JO arrangement, the customer is "not allowed" to trade or manage the backlog in such a way that the business case for the strategic value is compromised. The project narrative must be "stationary" (invariant to time or location of observation); although the JO nuts and bolts can be emergent.


Typically, if the agile principle of a persistent team structure is being followed, where the team metrics for throughput (velocity x time) are benchmarked, then the cost of a JO is almost the same every time -- plus or minus a SME or special tool -- and thus the "price" of the contract is "fixed" simply by limiting the number of JOs that will fit within the cost ceiling.
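
In a nutshell, and with made-up numbers, the arithmetic is as simple as this:

    # Illustrative numbers only: a persistent team with benchmarked throughput
    cost_per_job_order = 120_000      # nearly constant, plus or minus a SME or tool
    cost_ceiling = 1_000_000

    job_orders = cost_ceiling // cost_per_job_order    # 8 JOs fit under the ceiling
    fixed_price = job_orders * cost_per_job_order      # 960,000 -- the "fixed" price

    print(job_orders, fixed_price)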
 


There are other factors which are vexing in the public sector in a FP contract arrangement:
1. Agile promotes a shift in allegiance from the specification being dominant to the customer needs/wants/priorities being dominant. Try telling a CO you are not going to honor the spec as the first precedence!

2. Following on from that shift in allegiance, what then is the contractual definition of "done"? Is the project done when the money runs out (best effort); when the backlog is exhausted (all requirements satisfied); or when the customer simply says "I've got what I want"? This debate drives COs nuts.

3. How does a COTR verify and validate (V and V)? In the federal sector, V and V is almost a religion. But, what's to be validated? Typically, verify means everything that was supposed to be delivered got delivered; validate means it meets the quality standard of fitness for use. If the scope is continuously variable, what's to be verified? What do you tell the CO?

4. Can the "grand bargain" be contracted? I suggest a "grand bargain" between sponsor and PM (with customer's needs in the frame) wherein for a fixed investment and usually a fixed time frame the PM is charged with delivering the best value possible.

Best value is defined as the maximum scope (feature/function/performance) possible that conforms to the customer's urgency/need/want as determined iteratively (somewhat on the fly). Thus requirements are allowed to be driven (dependent) by customer's direction of urgency/need/want and available cost and schedule (independent).


Where the customer doesn't usually get a vote is on the non-functional requirements, especially those required to maintain certifications (like SEI level or ISO), compliance with certain regulations (particularly in safety, or in finance (SOX)), or certain internal standards (engineering or architecture).




Friday, October 3, 2014

Information Age Office Jockey


A recent essay starts this way:
We all know what makes for good character in soldiers. We’ve seen the movies about heroes who display courage, loyalty and coolness under fire. But what about somebody who sits in front of a keyboard all day? Is it possible to display and cultivate character if you are just an information age office jockey, alone with a memo or your computer?

And so, the conclusion of the essayist is: Yes! (always start with the good news). Indeed, we are pointed to the 2007 book, "Intellectual Virtues," by Robert C. Roberts of Baylor University and W. Jay Wood of Wheaton College, which lists some of the cerebral virtues.

Their table of contents suggests the following:
  • Love of knowledge
  • Firmness
  • Courage and caution
  • Humility
  • Autonomy
  • Generosity
  • Practical Wisdom
One thing not in the table of contents but certainly an element of character is taking responsibility for one's actions. This is emphasized in agile methods, indeed, in all project methods, but perhaps not enough in our everyday culture. Wood and Roberts give us this formula, credited to John Greco:


Would that there be more of us that subscribe to Greco!
(Re big words: canonically: relating to a general rule, protocol, or orthodoxy)



Wednesday, October 1, 2014

Red rover, red rover, send ..... over!


It's an old game but it's a way to choose a team:
  • Everyone stands around waiting to be chosen
  • The team leader chants: "Red rover, red rover, send { name } over!", and that person is chosen
Ooops! It's hell to be the last chosen... or worse, not chosen at all! Gives you a headache (take aspirin) and sometimes a heartache (Rejected!)

So, Carolyn O'Hara has some advice:
Do:
  1. Check your own behavior and biases for tendencies that might make people feel excluded
  2. Empower others — it makes them feel trusted and included
  3. Continually work at creating an inclusive culture — it’s an ongoing process
Don’t:
  1. Gloss over differences — people want their unique contributions to be valued
  2. Assume diversity is the same as inclusion
  3. Leave it to chance — be proactive about promoting inclusion

Catalyst Research Center for Advancing Leader Effectiveness recently completed a survey of 1,500 workers in six countries that showed people feel included when they “simultaneously feel that they both belong, but also that they are unique.” So, no Taylorism here: no one is "plug and play"; everyone has their unique utility.

Of course, while you're busy being inclusive, be aware that you might also need to be tolerant. Tolerance and inclusion are actually different ideas ... you can be tolerant without bothering to include; and you can be inclusive while being intolerant to those included.

It's a bit tricky for some, but for a productive team, you really should try for both qualities.

