Monday, December 9, 2024

Management v Engineer


A balloonist was lost. 
He descended to just 30' above the ground, where he spotted a lady below.
"Can you tell me where I am?" he called out.
She responded: "Your altitude is 30' above ground; your latitude is 28.538 north; your longitude is 81.378 west."

The balloonist said: "You must be an engineer!"
She responded: "Yes, how did you know?"
He said: "You have given me facts, but no information that is useful for me. You haven't helped me at all, and I'm still lost!"
She responded: "You must be in management!"
He said: "Yes, how did you know?"
She responded: "You have all the facts, but still you say you're lost and can not help yourself. You have positioned yourself above me, and me below you, only now you say it's my fault!"
Anonymous




Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, December 6, 2024

Being in the workforce



I cannot express to you how much you need to become all of these if you want to be employed / employable going forward.
  • Thinker / Creator
  • Product Manager
  • Developer (AI Augmented, of course)
  • Sales / Marketing expert (AI Augmented, of course)
These are all merging. It’s all one thing.
It’s just called being in the workforce.
You have to be able to come up with ideas, implement them, and then get them to market. Full stop.
This is the work side of Human 3.0.

Daniel Miessler



Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, December 2, 2024

Emergent methodologies



The influence of A.I. on project methodologies and process steps is yet to be defined, practised widely, and understood in the context of what most of us think of when we think of project methods and processes, skills, risks, and predictable and planned outcomes.

And so, we are told to brace for emergent outcomes, not entirely predictable at the outset. And we are told to adapt our leadership and management skills to this reality (or reality-in-waiting).

With that all in mind, I asked ChatGPT to opine on emergent methods. Here is what it gave me back:

In the context of project management, "emergence" or "emergent methods" refers to the phenomenon where complex and unpredictable outcomes arise from the interactions and dynamics of various elements within a project. Instead of relying solely on predetermined plans and rigid processes, emergent methods embrace adaptability, flexibility, and the ability to respond to changing circumstances.

Traditionally, project management has followed a linear or sequential approach, where all tasks and dependencies are predetermined in a detailed project plan. However, this approach can be limiting when faced with uncertain and dynamic environments. Emergent methods offer an alternative perspective by acknowledging that projects are often complex, and the future is uncertain.

In emergent methods, project managers adopt a more iterative and adaptive approach, focusing on continuous learning and improvement throughout the project lifecycle. Rather than trying to predict and control every aspect of the project from the outset, emergent methods encourage exploration, experimentation, and the ability to adjust the project's direction based on real-time feedback and insights.

Key characteristics of emergent methods in project management include:

Iterative Planning: Instead of creating a rigid and detailed plan at the beginning of the project, emergent methods emphasize iterative planning. This involves developing a broad roadmap and refining it in smaller increments as the project progresses and new information becomes available.

Continuous Learning and Adaptation: Emergent methods embrace the concept of learning as you go. Project teams gather feedback and data throughout the project's execution, allowing them to adapt their strategies and make informed decisions based on evolving conditions and requirements.

Flexibility and Agility: Emergent methods prioritize flexibility and agility to respond to changing circumstances. This may involve adjusting project scope, reallocating resources, or modifying project objectives based on emerging needs or unforeseen challenges.

Empowered Teams and Collaboration: Emergent methods emphasize empowering project teams to make decisions and encouraging collaboration. By promoting open communication and fostering a culture of trust, teams can respond effectively to emerging issues and capitalize on opportunities.

Embracing Uncertainty: Unlike traditional project management approaches that attempt to eliminate uncertainty, emergent methods recognize that uncertainty is inherent in complex projects. Instead of resisting it, these methods embrace uncertainty and focus on building resilience and adaptive capacity.

Emergent Leadership: In emergent methods, project managers adopt a leadership style that is more facilitative and empowering. They create an environment where team members can contribute their expertise and collectively make decisions to address emergent challenges.

By adopting emergent methods, project managers can better navigate complex and unpredictable projects, promote innovation and creativity, and increase the chances of project success in dynamic environments. However, it's important to note that emergent methods are not suitable for all projects and may require a shift in mindset and organizational culture to be effectively implemented.







Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, November 27, 2024

Sinking or sunk?



Let's keep the basics in front of us:
  • Projects are transformative processes packaged together
  • Inputs (cash, people, buildings and tools, overhead like training) are transformed into deliverables that don't remotely look and feel like the inputs
  • Deliverables are much more valuable than the sum of the inputs
Too often, the focus of the 'business' is on the inputs being consumed, like cash flow and resources, whereas the more experienced among us keep an eye on input/output efficiency.

And what do we mean by I/O efficiency?
We mean how well input consumption corresponds to its planned value, and how well the corresponding outputs conform to (planned) expectations, when each is sampled--measured--observed in the same time period.
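
As a back-of-the-envelope illustration (my hypothetical numbers, not a prescribed formula), the two ratios might be sketched like this:

```python
# Minimal sketch of I/O efficiency for one sample period; the figures are hypothetical.

def io_efficiency(planned_input, actual_input, planned_output, actual_output):
    """Return (input efficiency, output conformance) for one period."""
    input_efficiency = planned_input / actual_input       # > 1.0: inputs consumed under plan
    output_conformance = actual_output / planned_output   # > 1.0: outputs ahead of expectations
    return input_efficiency, output_conformance

# Planned 100 units of input for 100 units of output; actually spent 120 and delivered 90.
in_eff, out_conf = io_efficiency(100, 120, 100, 90)
print(f"Input efficiency:   {in_eff:.2f}")    # 0.83 -- consuming more input than planned
print(f"Output conformance: {out_conf:.2f}")  # 0.90 -- delivering less output than planned
```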

What about sinking and sunk?
Once the project grabs input and consumes it, that input is "sunk", and can't be changed or refunded.
Most of us are familiar with the first law of 'sunk' resources: "Don't use the sunk resource to make a decision about a sinking project."

Those focused on sunk resources are focused on inputs rather than outcomes, on the rearview mirror rather than the windshield, and may not see the opportunity for adjustments.

That is: the future of your project--if it has one--should stand on its own merits re how resources will be used in the future, not so much how they were used in the past.

Why this first law?
Because at the moment you are challenged--even a self-challenge--to justify your future by citing the past, that is the time to do a root-cause analysis of the efficiencies. Depending on the analysis, you will have an opportunity to make decisions that alter the likely future efficiencies, and you have the opportunity to 're-baseline'.

The future may not "wash-rinse-repeat" what has already been sunk.




Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, November 22, 2024

System Integrator -- Owner Rep roles



In the government domain, the government often contracts for an SI -- system integrator -- whose scope of work is to be an independent evaluator of program plans and progress, an expert adviser to the program executive for risk management and value engineering, and a voice in the project office not beholden to the prime contractor(s), system architect(s), or other implementers. 

In large programs, the SI may work simultaneously with multiple prime contractors, overseeing their coordination, communications, consistency in approach, integration of scope, and guarding for "white space" gaps. The SI may even evaluate the integrated program for 'chaos' ... the unintended outcomes of an integrated 'whole'. 

In some limited situations, the SI may even develop an interface to cover a white-space gap.

In the commercial domain, a similar scope and role is often given to an "owner's representative".

Necessary or Nice to Have?
Your first thought may be: Another scope of work .... do I really need this for project success? If I don't engage with a service provider for this scope, is this something I am going to have to learn how to do myself, and then allocate my resources to the task? Or, can I get by without it?

Quick answer: It's work that has to be done ... to some level of scope ... so either the PEO or PMO does it with in-house resources, or the PEO/PMO engages an SI who is expert in the scope and presumably not learning on the job on your nickel.

And, by the way, if you do engage with an independent SI, then cooperation with the SI on the part of your architect, prime contractor, and perhaps other stakeholders has to be made part of the Statement of Work (SoW) with those parties. Question worth asking: Does that cooperation come at a cost, monetized or functional?

What's the ROI on the SI engagement?
So, whether you are a government program office or a business unit with a large capital project, what's the value-add of having an SI or owner's rep on the scene? Is there a monetized ROI to the cost of a SI, or is the advantage with a DIY model (do it yourself)?  

In many respects, it's the insurance model: High impact with low probability, to take a square from the risk matrix. Thus, a low expected value, but nonetheless the impact is judged unaffordable. 

The usual risk management doctrine is this: You've got a big (big!) project with a lot of moving parts (different contractors doing different stuff). Get yourself an SI! (At a cost which is usually a small multiple of the expected value, if you think of it in terms of insurance.)
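
To put the insurance analogy into arithmetic, here is a minimal sketch with entirely hypothetical figures for probability, impact, and SI cost:

```python
# Hypothetical insurance-style arithmetic for the SI decision.
probability_of_derailment = 0.05     # low probability (one square of the risk matrix)
impact_if_derailed = 50_000_000      # high impact: rework, slipped schedule, lost value

expected_value_of_risk = probability_of_derailment * impact_if_derailed   # 2,500,000
si_engagement_cost = 5_000_000       # a small multiple of the expected value

print(f"Expected value of the risk: ${expected_value_of_risk:,.0f}")
print(f"Cost of the SI engagement:  ${si_engagement_cost:,.0f}")
# The expected value is low, but if the raw impact is judged unaffordable,
# the SI is bought like an insurance premium against it.
```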

SI Scope
The SI is on alert for these 'black swan' impacts that could derail the program, extend the schedule, impact performance, or cost big bucks for rework. 

The SI comes on the job early, typically from Day-1, working down the project definition side of the "V" chart (see chart below).

The SI is an advisor to the PEO or PMO regarding threats to the cost, scope, schedule, or quality. If there are value engineering proposals to be fit into the program, the SI is usually the first to evaluate and advise about their applicability.

The SI is an independent evaluator ("red team") of specifications, looking for inconsistencies, white space gaps, sequencing and dependency errors, and inconsistent metrics.

The SI is an independent technical reviewer for the PMO of the progress toward technical and functional performance. The SI may provide much of the data for earned-value analysis.

The SI can be an independent trouble-shooter, but mostly the SI is looking for inappropriate application of tools, evaluation of root cause, and effectiveness of testing.

The SI may be an independent integrator of disparate parts that may require some custom connectivity. This is particularly the case when addressing a portfolio. The SI may be assigned the role of pulling disparate projects together with custom connectors.

The SI may be an independent integration tester and evaluator, typically moving up the "V" from verification to validation.

In a tough situation, the SI may be your new best friend!
What about agile?
'Agile-and-system-engineering' is always posed as a question. My answer is: "Of course; every project is a system of some kind and needs a system engineering treatment". More on this here and here.

And, by extension, large-scale agile projects can benefit from an SI, though the pre-planned specification review role may be less dominant, and other inspections, especially the red team role for releases, will be more dominant.

V-model
Need to review the "V-model"? Here's the image; check out the explanation here.


Like this blog? You'll like my books also! Buy them at any online book retailer!

Saturday, November 16, 2024

When value is asymmetrical



I've written a couple of books on project value; you can see the book covers at the end of this blog.
One of my themes in these books is a version of cybernetics:
Projects transform disparate inputs into something of greater value. More than a transfer function, projects fundamentally alter the collective value of resources in a cybernetic way: the value of the output is all but indiscernible from an examination of the inputs.

But this posting is about asymmetry. Asymmetry is a different idea from cybernetics.

"Value" is highly asymmetrical in many instances, without engaging cybernetics. One example cited by Steven Pinker is this:

Your refrigerator needs repair. $500 is the estimate. You groan with despair, but you pay the bill and the refrigerator is restored. But would you take $500 in cash in lieu of refrigeration? I don't know anyone who would accept $500 in cash to do without refrigeration: the repair costs $500, but the value of refrigeration is far greater.

Of course there is the 'availability' bias that is also value asymmetrical:

"One in hand is worth two in the bush"

And there is the time displacement asymmetry:

The time-value of money; present value is often more attractive than a larger future value. The difference between them is the discount for future risk and deferred utility.
Let's not forget there is the "utility" of value:
$5 is worth much less to a person with $100 in their pocket than it is to a person with only $10.
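
A minimal sketch of the last two asymmetries, using an assumed 10% discount rate and a log-utility stand-in for diminishing marginal value (both choices are illustrative, not prescriptive):

```python
import math

# Time-displacement asymmetry: a present value vs. a larger future value.
def present_value(future_value, discount_rate, years):
    """Discount a future amount back to today."""
    return future_value / (1 + discount_rate) ** years

print(round(present_value(600, 0.10, 2), 2))  # ~495.87: $600 two years out is worth less than $500 today

# Utility asymmetry: the same $5 matters more to the thinner pocket.
# Log utility is one common stand-in for diminishing marginal value.
def marginal_utility_of_gain(pocket, gain):
    return math.log(pocket + gain) - math.log(pocket)

print(round(marginal_utility_of_gain(10, 5), 3))   # 0.405 -- a big jump for the $10 pocket
print(round(marginal_utility_of_gain(100, 5), 3))  # 0.049 -- barely noticed by the $100 pocket
```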

How valuable?
So when someone asks you "how valuable is your project", your answer is ...... ?

 




Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, November 12, 2024

ISO 42001 AI Management Systems



Late in 2023, ISO published ISO/IEC 42001:2023, "Information technology - Artificial intelligence - Management system".

To quote ISO:
ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations.

It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.

For project offices and project managers, there are some points that bear directly on project objectives:

  • The standard addresses the unique challenges AI poses, which may need to be in your project's requirements deck, such as properties or functionality that addresses ethical considerations, transparency, and continuous learning. 
  • For organizations and projects, the standard sets out a structured way to manage risks and opportunities associated with AI, balancing innovation with governance.
Learn More
Of course, with something like this, to learn more you need go no further than the ISO website (more here) for relevant PDFs and FAQs. But you can also find myriad training seminars which, for a price, will give you more detail.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, November 8, 2024

Risk on the black diamond slope



If you snow ski, you understand the risk of a Black Diamond run: it's a moniker, or label, for a path that is risky all the way down, and you take it (for the challenge? the thrill? the war-story bragging rights?) even though there may be a lesser risk on another way down.

So it is in projects sometimes: In my experience, a lot of projects operate more or less on the edge of risk, with no real plan beyond common sense and a bit of past experience to muddle through if things go wrong.

Problematic, as a process, but to paraphrase the late Donald Rumsfeld: 
You do the project with the resources and plan you have, not the resources or plan you want.
You may want a robust risk plan, but you may not have the resources to research it and put it together.
You may not have the resources for a second opinion
You may not have the resources to maintain the plan. 
And, you may not have the resources to act upon the mitigation tactics that might be in the plan.

Oh, woe is me!

Well, you probably do what almost every other PM has done since we moved past cottage industries: You live with it and work the consequences when they happen. Obviously, this approach is not in any RM textbook, briefing, or consulting pitch. But it's reality for a lot of PMs.

Too much at stake
Of course, if there is safety at stake for users and developers, as there is in many construction projects; and if there is really significant OPM invested that is 'bet the business' in scope; and if there are consequences so significant for an error moved into production that lives and livelihoods are at stake, then the RM plan has to move to the 'must have'.  

A plan with no action
And then we have this phenomenon: You actually do invest in a RM plan; you actually do train for risk avoidance; and then you actually do nothing during the project. I see this all the time in construction projects where risk avoidance is clearly known; the tools are present; and the whole thing is ignored.

Show me the math
But then, of course, because risk is an uncertainty, subject to the vagaries of random numbers with their attendant distributions and statistics, there are these problems:
  • It's easy to lie, or mislead, with 'averages' and more broadly with a raft of other statistics. See: How to Lie with Statistics (many authors) 
  • Bayes is a more practical way for one-off projects to approach uncertainty than frequency-of-occurrence methods that require big data sets for valid statistics, but few PMs really understand the power of Bayes. 
  • Coincidence, correlation, and causation: Few can tell one from the other; and for that very reason, many can be led by the few to the wrong fork in the road. Don't believe in coincidence? Then, of course, there must be a correlation or causation!
The upshot?
Risk, but no plan.
Or plan, but no action.


Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, November 4, 2024

The people are told .....




In the beginning, "people" are told: "It's too soon to know where we are in this project"

After the beginning, "people" are told: "It's too late to stop the project; there's too much sunk; we have to keep going"



Sampling the data
And so the bane of big projects comes down to poor sampling technique: 
Either the early details are not predictive because the early "efficiencies" of cost per unit of value earned have too little history to be useful as a long-term predictor; or you've accepted the first idea for too long, thereby failing to update efficiency predictions until the late details are too late to pull the plug on a bad bet.

Sunk cost decisions:
It's easy to write this, and far less easy to execute, but never make a decision about the future based on the sunk cost of the past. You can't do anything about recovering the actual expenditure, but you do have free will -- politics aside -- regarding more spending or not. 

History has value
On the other hand, sunk cost has a history, and if you are good at what you do, you will use that history to inform your decisions about the opportunity of the future.




Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, October 23, 2024

New Grads for your Project



If you recruit new grads for your project, I wonder if your experience comports with this report from "Intelligent"? The gist seems to be this:
  • Headline: "1 in 6 Companies Are Hesitant To Hire Recent College Graduates"
  • 75% of companies report that some or all of the recent college graduates they hired this year were unsatisfactory
  • Hiring managers say recent college grads are unprepared for the workforce, can’t handle the workload, and are unprofessional
I'm thinking there may be more troubles in established firms than in more "flexible" start-ups, but here's Intelligent's observation:
“Many recent college graduates may struggle with entering the workforce for the first time as it can be a huge contrast from what they are used to throughout their education journey. They are often unprepared for a less structured environment, workplace cultural dynamics, and the expectation of autonomous work. Although they may have some theoretical knowledge from college, they often lack the practical, real-world experience and soft skills required to succeed in the work environment. These factors, combined with the expectations of seasoned workers, can create challenges for both recent grads and the companies they work for,” says Intelligent’s Chief Education and Career Development Advisor, Huy Nguyen.
Wow! I hope this is not true in your industries, but it may be distressingly common.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, October 14, 2024

Quantitative Methods: It takes a number!

Numbers are a PM's best friend
Is this news?
I hope not; I wrote the book Quantitative Methods in Project Management some years ago. (Still a good seller)

So, here's a bit of information you can use:
Real numbers: (**)
  • Useful for day-to-day project management
  • 'Real numbers' are what you count with, measure with, budget with, and schedule with. You can do all manner of arithmetic with them, just as you learned in elementary school.
  • Real numbers are continuous, meaning that between any two real numbers, every number in between is also a real number
  • Real numbers can be plotted on a line, and there is no limit to how long the line can extend; a real number can also be a decimal of infinite length
  • Real numbers are either rational (a ratio of two integers) or irrational (like 'pi', not expressible as such a ratio)

Random numbers

  • Essential for risk management subject to random effects
  • Not a number exactly, but a number probably
  • Random numbers underlie all of probability and statistics, and thus are key to risk management
  • Random numbers are not a point on a line -- like 2.0 -- but rather a range on a line like 'from 1.7 to 2.3'
  • The 'distribution' of the random number describes whether the actual value is more likely to be 1.7 than 1.9, etc.
  • Mathematically, distributions are expressed in functional form, as for example the value of Y is a consequence of the value of X.
  • Arithmetic cannot be done with random numbers per se, but arithmetic can be done on the functions that represent random numbers. This is a very complex business, and is usually best done by simulation rather than a direct calculus on the distributions. 
  • Monte Carlo tools have made random numbers practical in project management risk evaluations (see the sketch following this list).
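
Here is a minimal Monte Carlo sketch, assuming a hypothetical three-task schedule with triangular duration estimates; the numbers are made up for illustration:

```python
import random

# Minimal Monte Carlo sketch: three sequential tasks, each a random duration
# drawn from a triangular distribution (low, most-likely, high) in days.
tasks = [(3, 5, 9), (2, 4, 8), (5, 8, 14)]   # hypothetical estimates

def one_trial():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for (lo, mode, hi) in tasks)

trials = sorted(one_trial() for _ in range(10_000))

mean = sum(trials) / len(trials)
p80 = trials[int(0.80 * len(trials))]        # 80th-percentile completion
print(f"Mean total duration: {mean:.1f} days")
print(f"80% confidence duration: {p80:.1f} days")
```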

Rational numbers:

  • A number that is a ratio of two integers
  • In project management, ratios are tricky: both the numerator and the denominator can move about, but if you are looking only at the ratio, like a percentage, you may not have visibility into what is actually moving.
 Irrational numbers
  • A number that is not a ratio of integers, and thus has an infinite, non-repeating decimal expansion, like 'pi'
  • Mostly these show up in science and engineering, and so less likely in the project office
  • Many 'constants' in mathematics are irrational .... they just are what they are
 Ordinal numbers
  • A number that expresses position, like 1st or 2nd
  • You cannot do arithmetic with ordinal numbers: No one would try to add 1st and 2nd place to get 3rd place
  • Ordinal numbers show up in risk management a lot. Instead of 'red', 'yellow', 'green' designations or ranks for risk ranking, an ordinal rank like 1, 5, 10 is often used to rank risks. BUT, such numbers are really labels, where 1 = green, etc. You can no more do arithmetic on 1, 5, 10 labels than you can add red + green. At best 1, 5, 10 are ordinal; they are not continuous like real numbers, so arithmetic is disallowed.
Cardinal numbers and cardinality
  • Cardinality refers to the number of units in a container. If a set, or box, or a team contains 10 units, it is said that its cardinality is 10. 
  • Cardinal numbers are the integers (whole numbers) used to express cardinality
  • In project management, you could think of a team with a cardinality of 5, meaning 5 full-time equivalents (whole number equivalent of members)
 
Exponents and exponential performance
  • Any nonzero real number raised to the exponent '0' has the value '1'. Example: 3exp0 = 1
  • An exponent tells us how many times a number is multiplied by itself: 2exp3 means: 2x2x2 (*)
  • In the project office, rapid growth is often encountered. Famously, the number of communication paths among N communicators (team members) is N(N-1)/2, which grows roughly as Nexp2. Thus, as you add team members, you add communication paths much faster than headcount, such that some say: "adding team members actually detracts from productivity and throughput!" (See the sketch just below.)
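
A quick sketch of how fast the path count grows (the formula N(N-1)/2 is standard; the team sizes are arbitrary):

```python
# Communication paths among N team members: N(N-1)/2, which grows roughly as N squared.
def communication_paths(n):
    return n * (n - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n:>2} members -> {communication_paths(n):>3} paths")
# 3 members  ->   3 paths
# 5 members  ->  10 paths
# 10 members ->  45 paths
# 20 members -> 190 paths (doubling the team roughly quadruples the paths)
```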
 Vectors
  • Got a graphics project? You may have vector graphics in your project solution
  • Vectors are numbers with more than one constituent; in effect a vector is a set of numbers or parameters
  • Example: [20mph, North] is a two-dimensional vector describing magnitude (speed) and direction
  • In vector graphics, the 'vector' has the starting point and the ending point of an image component, like a line, curve, box, color, or even text. There are no pixels ... so the image can scale (enlarge) without the blurriness of pixels.
 ----------------------------
(*) It gets tricky, but exponents can be decimal, like 2.2. How do you multiply a number by itself 2.2 times? It can be done, but you have to use logarithms which work by adding exponents.
 
(**) This begs the question: are there 'un-real' numbers? Yes, there are, but mathematicians call them 'imaginary numbers'. When a number is imaginary, it is denoted with an 'i', as 5i. These are useful for handling vexing problems like the square root of a negative number, because iexp2 = -1; thus i = square root of -1. 



Like this blog? You'll like my books also! Buy them at any online book retailer!

Thursday, October 10, 2024

Bayes Thinking Part II



In Part I of this series, we developed the idea that Thomas Bayes was a rebel in his time, looking at probability problems in a different light, specifically from the proposition of dependencies between probabilistic events.

In Part I we posed the project situation of 'A' and 'B', where 'A' is a probabilistic event--in our example 'A' is the weather--and 'B' is another probabilistic event, the results of tests. We hypothesized that 'B' had a dependency on 'A', but not the other way 'round.

Bayes' Grid

The Figure below is a Bayes' Grid for this situation. 'A+' is good weather, and 'B+' is a good test result. 'A' is independent of 'B', but 'B' has dependencies on 'A'. The notation 'B+ | A' means a good test result given any condition of the weather, whereas 'B+ | A+' [shown in another figure] means a good test result given the condition of good weather. 'B+ and A+' means a good test result when at the same time the weather is good. Note the former is a dependency and the latter is an intersection of two conditions; they are not the same.

  
The blue cells all contain probabilities; some will be from empirical observations, and others will be calculated to fill in the blanks. The dark blue cells are joint probabilities (intersections) of specific conditions of 'A' and 'B'. The light blue cells are probabilities of either 'A' or 'B'.

Grid Math

There are a few basic math rules that govern Bayes' Grid.
  • The dark blue space [4 cells] is every condition of 'A' and 'B', so the numbers in this 'space' must sum to 1.0, representing all combinations of 'A' and 'B'
  • The light blue row just under the 'A' is every condition of 'A', so this row must sum to 1.0
  • The light blue column just adjacent to 'B' is every condition of 'B' so this column must sum to 1.0
  • The dark blue columns or rows must sum to their light blue counterparts
Now, we are not going to guess or rely on a hunch to fill out this grid. Only empirical observations and calculations based on those observations will be used.

Empirical Data

First, let's say the empirical observations of the weather are that 60% of the time it is good and 40% of the time it is bad. Going forward, using the empirical observations, we can say that our 'confidence' of good weather is 60%-or-less. We can begin to fill in the grid, as shown below.


In spite of the intersections of A and B shown on the grid, it's very rare for the project to observe them. More commonly, observations are made of conditional results.  Suppose we observe that given good weather, 90% of the test results are good. This is a conditional statement of the form P(B+ | A+) which is read: "probability of B+ given the condition of A+".  Now, the situation of 'B+ | A+' per se is not shown on the grid.  What is shown is 'B+ and A+'.  However, our friend Bayes gave us this equation:
P(B+ | A+) * P(A+) = P(B+ and A+) = 0.9 * 0.6 = 0.54


Take note: B+ is not 90%; in fact, we don't know yet what B+ is.  However, we know the value of 'B+ and A+' is 0.54 because of Bayes' equation given above.

Now, since the grid has to add in every direction, we also know that the second number in the A+ column is 0.06, P(B- and A+).

However, we can go no farther until we obtain another independent empirical observation.
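
A minimal sketch of the grid arithmetic so far, using the post's numbers (the variable names are mine):

```python
# Sketch of the grid arithmetic above, using the post's numbers.
p_A_good = 0.6                       # empirical: good weather 60% of the time
p_B_good_given_A_good = 0.9          # empirical: given good weather, 90% of tests are good

# Bayes' multiplication rule: P(B+ and A+) = P(B+ | A+) * P(A+)
p_B_good_and_A_good = p_B_good_given_A_good * p_A_good    # 0.54

# The A+ column must sum to P(A+), so the remaining cell follows by subtraction:
p_B_bad_and_A_good = p_A_good - p_B_good_and_A_good       # 0.06

print(round(p_B_good_and_A_good, 2), round(p_B_bad_and_A_good, 2))   # 0.54 0.06

# Note: P(B+) itself is still unknown; filling the A- column needs one more
# independent empirical observation, such as P(B+ | A-).
```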
 
To be continued

In the next posting in this series, we will examine how the project risk manager uses the rest of the grid to estimate other conditional situations.

 


Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, October 7, 2024

Bayes thinking, Part I





Our friend Bayes, Thomas Bayes, late of the 18th century, an Englishman, was a mathematician and a pastor whose curiosity led him to ponder the nature of random events.

There was already a body of knowledge about probabilities by his time, so curious Bayes went at probability in a different way. Until Bayes came along, probability was a matter of frequency:
"How many times did an event happen/how many times could an event happen". In other words, "actual/opportunity".

To apply this definition in practice, certain, or "calibrated", information is needed about the opportunity, and of course actual outcomes are needed, often several trials of actual outcomes.

Bayes' Insight
Recognizing the practicalities of obtaining the requisite information, brother Bayes decided, more or less, to look backward from actual observations to ascertain and understand conditions that influenced the actual outcomes, and might influence future outcomes.

So Bayes developed his own definition of probability that is not frequency and trials oriented, but it does require an actual observation. Bayes’ definition of probability, somewhat paraphrased, is that probability is...
The ratio of expected value before an event happens to the actual observed value at the time the event happens.

This way of looking at probability is really a bet on an outcome based on [mostly subjective] evaluations of circumstances that might lead to that outcome. It's a ratio of values, rather than a frequency ratio.

Bayes' Theorem
He developed a widely known explanation of his ideas [first published after his death] that has become known as Bayes' Theorem. Used quantitatively [rather than qualitatively, as Bayes himself reasoned], Bayesian reasoning begins with an observation, hypothesis, or "guess" and works backward through a set of mathematical functions to arrive at the underlying probabilities.

To use his theorem, information about two probabilistic events is needed:

One event, call it 'A', must be independent of outcomes, but otherwise has some influence over outcomes. For example, 'A' could be the weather. The weather seems to go its own way most of the time. Specifically 'good weather' is the event 'A+', and 'bad weather' is the event 'A-'. 

The second event, call it 'B', is hypothesized to have some dependency on 'A'. [This is Bayes' 'bet' on the future value] For example, project test results in some cases could be weather dependent. Specifically, 'B+' is the event 'good test result' and 'B-' is a bad test result;  test results could depend on the weather, but not the other way 'round.

Project Questions
Now, the situation we have described raises some interesting questions:
  • What is the likelihood of B+, given A+? 
  • What are the prospects for B+ if A+ doesn't happen? 
  • Is there a way to estimate the likelihood of B+ or B- given any condition of A? 
  • Can we validate that B indeed depends on A?

Bayes' Grid
Curious Bayes [or those who came after him] realized that a "Bayes' Grid", a 2x2 matrix, could help sort out functional relationships between the 'A' space and the 'B' space. Bayes' Grid is a device that simplifies the reasoning, provides a visualization of the relationships, and avoids dealing directly with equations of probabilities.

Since there's a lot of detail behind Bayes' Grid, we'll take up those details in Part II of this series.

Photo credit: Wikipedia

Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, September 25, 2024

The SpaceX approach


In the September 2-15, 2024 edition of Aviation Week and Space Technology there is an article about the 5-step process at SpaceX for getting the most effective project outcome. In a few words summarized here, the steps are:
  1. Challenge the Requirements. Interpret this as: your requirements are dumb; find a way to make them less dumb!
  2. Find a way to eliminate a process step of poor value, or a part or component that can be simplified
  3. Find a way to make it easier, faster, cheaper to reproduce or manufacture
  4. Prioritize speed and responsiveness in everything you do or will have done.
  5. Automate everything! Take people out of the manufacturing and production process wherever possible.


Like this blog? You'll like my books also! Buy them at any online book retailer!

Saturday, September 21, 2024

Einstein's methodology



"When I have one week to solve a seemingly impossible problem, I spend six days defining it, and then the solution becomes obvious."

Albert Einstein



Like this blog? You'll like my books also! Buy them at any online book retailer!

Wednesday, September 18, 2024

The ideal number of project workers is ....


One theory of project staffing is that the ideal number of project people is ZERO.
Don't believe it?
The objective of the project is to deliver something to the business in a timeframe and a budget.

At present, it takes people to do that. But ideally, the number of people would be vanishingly small if they were super optimized, super talented, and only touched a process when it needed an initiation or some other budget.

Not buying this?
Well, at the business level, the same applies. 
Take a read .... 5 min or so ... of this essay which makes the point, and then the counterpoint, for the future of 9-5 jobs generally. 

From the essay (actually, the essay ends on an upbeat, so don't take the following as the only worthy content):
The ideal number of employees in any company is zero. If a company could run and make money using no people, then that is exactly what it should do.

Nobody owes anybody a job. Literally the only reason anyone has one is because there was a problem at some point in that business that required a human to do some part of the work. Building on that, if that ever becomes not the case, for a particular person or team or department of human employees, the natural next action is to get rid of them.

and just now —starting in 2023 and 2024, it is actually becoming possible to replace human intelligence tasks with technology.


Like this blog? You'll like my books also! Buy them at any online book retailer!

Sunday, September 15, 2024

A.I. Risk Repository


MIT may have done us a favor by putting together a compendium of risks associated with A.I. systems.
Named the "A.I. Risk Repository", there are presently 700 or so risks categorized in 23 frameworks by domain and cause, organized as a taxonomy for each of these characteristics.

The Causal taxonomy addresses 'how, when, and why' of risks.
The Domain taxonomy goes into 7 domains and 23 subdomains, so certainly some fine grain there. 

YouTube, of course
This is a public resource, so naturally there's a YouTube video on what it's all about and how to use it.

There's a lot of stuff
If you go to the link given in the first paragraph and scroll down a bit, you will be invited to wade into the database, working your way through the taxonomies. There's just a lot of stuff there, so give it a look.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, September 10, 2024

Slow is smooth; Smooth is fast


The blog title is actually from the mantra of the U.S. Navy SEALS
Slow is smooth; smooth is fast.

U.S. Navy SEALS

Now, this bit of wisdom may strike you as similar to the project tips we've been working with for years, to wit: "quality is free", and "it's cheaper and faster to do it right the first time" which recognizes the cost and schedule penalty of rework.

It's about rhythm and balance

From the SEALS website, we learn: "This phrase isn't just about being slow or fast; it's about finding a rhythm that balances precision and pace, ultimately leading to swifter progress. The SEALs swear by it ... but how can we apply it beyond military contexts?"

More depth:

Of course, there's a YouTube on "Smooth and Fast"

On the website, link given in the first sentence, there is a long-form article on the concept. Two chapters stand out:

Applying "Slow is Smooth, Smooth is Fast" Beyond Military Contexts

Incorporating the Mantra into Business Practices 
Using the Mantra for Project Management

The Role of "Slow is Smooth, Smooth is Fast" in Team Dynamics

Promoting Smoothness in Team Operations
The Mantra's Impact on Team Efficiency

In the PM domain, the recommendations are: 

  • Be deliberate; take the time to consider and prepare
  • Quality trumps speed (the cost of rework is embedded in this one)
  • Keep refining (Sort of a Bayesian idea, not so much continuous improvement)
In team dynamics, few errors and quality outcomes build trust, confidence, and high morale.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, September 6, 2024

Stability! It counts for a lot



Stability!
It counts for a lot.
It implies -- for behaviors and management decisions -- predictability, reliability, under-control (but not risk-free, of course), coherent narrative, steady-state goals, and a strategy that is understandable to those who have the job of implementing it.

Perhaps you are aware, as many are, that stability requires feedback to effect error correction and trap excesses and blind alleys. 
Ah yes!
We know about feedback.
Open loop systems -- those with outcome but no feedback -- are prone to many uncontrolled and unexpected responses. Who can predict what a stimulus will do to a system that has no feedback? Actually, that's a really tricky task.

So, what about feedback? 
What's to know?
  • Timing is everything! Getting the feedback "phased" in time such that it has a correcting effect rather than a destructive effect is vital. The former is generally called "negative feedback" for its corrective nature; the latter is generally called "positive feedback" for its reinforcing rather than corrective nature. And, when it's too late, it's generally called ineffective.

  • Amplitude, or strength, or quantity is next: It has to be enough, but not too much. Tricky that! Experimentation and experience are about the only way to handle this one.
What could possibly go wrong?
Actually, a lot can go wrong.

No feedback at all is the worst of the worst: the 'system' is 'open loop', meaning that there are outcomes that perhaps no one (or no thing) is paying attention to. Stuff happens, or is happening, and who knows (or who knew)?

Timing errors are perhaps the next worst errors: if the timing is off, the feedback could be 'positive' rather than 'negative' such that the 'bad stuff' is reinforced rather than damped down. 

Strength errors are usually less onerous: if the strength is off, but the timing is on, then the damping may be too little, but usually you get some favorable effect.
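
A toy sketch of these failure modes, using a simple proportional correction with made-up drift and gain values:

```python
# Toy sketch of feedback correcting a drifting quantity toward a target.
# The drift and gain values are purely illustrative.
def run(gain, periods=8, target=100.0, drift=5.0):
    value = target
    history = []
    for _ in range(periods):
        value += drift                 # uncorrected drift away from the target
        error = value - target
        value -= gain * error          # feedback correction proportional to the error
        history.append(round(value, 1))
    return history

print("Open loop (no feedback): ", run(gain=0.0))   # drifts steadily away
print("Timely, right-sized:     ", run(gain=0.8))   # settles close to the target
print("Too strong, overcorrects:", run(gain=2.5))   # oscillates and grows worse, like positive feedback
```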

Practical project management
Feedback for correcting human performance is familiar to all. Too late and it's ineffective; too much over the top and it's taken the wrong way. So, timing and strength are key.

But, the next thing is communication: both verbal and written (email, etc.). Closing the loop provides reassurance of the quality and effectiveness of communication. You're just not talking or writing into the wind!

And, of course, in system or process design, loops should never be open. Who knows what could happen.

I should mention:
The study of feedback systems generally falls within what is called 'cybernetics'. As described by sciencedirect.com, MIT mathematician Norbert Wiener defined cybernetics as "the study of control and communication in the animal and the machine." 

From Wikipedia, we learn: The core concept of cybernetics is circular causality or feedback—where the observed outcomes of actions are taken as inputs [ie, feedback] for further action in ways that support the pursuit and maintenance of particular conditions [ie, 'ways that support' requires the correct timing and strength]



Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, September 2, 2024

The least Maximum schedule


To minimize your maximum schedule is a good thing. Or, it should be.
Here's how to do it:
  • Subordinate all other priorities to the most important tasks. This begs the question: is there an objective measure of importance, and from whom or what does such a measure emanate?
  • If you can measure 'importance' (see above) then do the densest tasks first, as measured by the ratio of importance to time.

    Note: a short time (denominator) will "densify" a task, so some judgement is required so that a whole bunch of short tasks don't overwhelm the larger picture. In the large picture, you would hope that the density is driven by the numerator (importance)

  • Always do an 'earliest start', putting all the slack at the end. You may not need it, but if you do it will be there.
  • Move constraints around to optimize the opportunity for an earliest start that leads to least maximum. See my posting on this strategy.

  • If a new task drops into the middle of your schedule unannounced, prioritize according to 'density' (See above). This may mean dropping what you are doing and picking up the new task. Some judgement required, of course. It's not just a bot following an algorithm here. 

  • If some of your schedule drivers have random components, and you have to estimate the next event with no information other than history, then "Laplace's Law of Succession" may be helpful, to wit:
    • To the prior (independent) outcomes observed, expressed as a probability, add "1" to the numerator and "2" to the denominator to predict the probability of the next event. (*)

      So, by example, if your history is that you observed, measured, or obtained a particular outcome independently 3 of 4 times (3/4), Laplace's Law would predict (3+1)/(4+2) as the probability for the next similar outcome, or 4/6. This figure is a bit more pessimistic, as you would expect from giving extra weight to the number of trials (denominator). A short sketch of both ideas follows this list.
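
Here is a minimal sketch of density ordering and Laplace's rule, with hypothetical task data:

```python
# Sketch of the two ideas above, with hypothetical task data.

# 1) 'Density' ordering: importance per unit of time, densest first.
tasks = [("A", 8, 4), ("B", 5, 1), ("C", 9, 6)]    # (name, importance, duration)
by_density = sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)
print([name for name, importance, duration in by_density])   # ['B', 'A', 'C']

# 2) Laplace's Law of Succession: (n + 1) / (d + 2)
def laplace_succession(successes, trials):
    """Probability estimate for the next outcome, given n successes in d trials."""
    return (successes + 1) / (trials + 2)

print(laplace_succession(3, 4))   # 0.666..., a bit more pessimistic than the raw 3/4
```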
_________________________

(*) (n+1)/(d+2) isn't just a guess, or throwing a dart at a board. It is a rigorous outcome of an algebraic limit to a long string of 1's and 0's with historic probability of n/d. Although Laplace did the heavy lifting, Bayes gets the popular credit for the idea of using prior observations as the driver for new estimates with a modified hypothesis.


Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, August 27, 2024

Never a dull moment ...



For some, boredom is the great fear. Got to keep moving!
"He had a function, an excuse for activity. For a few hours at least he wouldn’t be bored. ... he drank the coffee, which was still too hot. He reflected that the fear of boredom had driven him the whole of his life."
Ann Cleeves, Novelist

The fear of boredom was a driver ...
Frankly, I know how he feels

Add value
It shouldn't be motion for motion's sake
It should be about the utility of what you are doing
I need an activity plan for every day ... how will this day add value to what I am about?

About utility
Utility is the marginal difference between face value and the value you -- or someone else -- puts on what you are doing or offering. 

If you think about it, almost anyone can offer up face value if they have the skills for that domain, but if you are in constant motion -- avoiding boredom -- then that activity should be directed at more than just face value.

Even if it's just reading a book, the question is: how much better off are you for having engaged in that activity? For me, I read a lot of history because I think there are lessons there to be applied forward that will add value to my endeavors. And, of course, I might avoid a risk I might not otherwise understand.

If you are driven to activity ...
Make it count for something.




Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, August 23, 2024

Leonardo's Lament



"The supreme misfortune is when theory outstrips performance"
Leonardo da Vinci

And then there's this: 

During the technical and political debates in the mid-1930's by the FCC with various engineers, consultants, and business leaders regarding the effect, or not, of sunspots on various frequency bands being considered for the fledgling FM broadcast industry, the FCC's 'sunspot' expert theorized all manner of problems.

But Edwin Armstrong, largely credited with the invention of FM as we know it today, disagreed strongly, citing all manner of empirical and practical experimentation and test operations, to say nothing of calculation errors and erroneous assumptions shown to be in the 'theory' of the FCC's expert.

But, to no avail; the FCC backed its expert.

Ten years later, after myriad sunspot eruptions, there was this exchange: 

Armstrong: "You were wrong?!"

FCC Expert: "Oh certainly. I think that can happen frequently to people who make predictions on the basis of partial information. It happens every day"



++++++++++
Quotations are from the book "The Network"
 


Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, August 19, 2024

Out of Sight Activity


Back in yesteryear, I recall the first time I had a management job big enough that my team was too large for line-of-sight from my desk and location.

Momentary panic: "What are they doing? How will I know if they are doing anything? What if I get asked what are they doing? How will I answer any of these questions?"

Epiphany: What I thought were important metrics now become less important; outcomes rise to the top
  • Activity becomes not too important. Where and when they worked could be delegated locally
  • Methods are still somewhat important because Quality (in the large sense) is buried in Methods. So, can't let methods be delegated willy nilly
  • Outcomes now become the biggie: are we getting results according to expectations?
There's that word: "Expectations"
In any enterprise large enough to not have line-of-sight to everyone, there are going to be lots of 'distant' managers, executives, investors, and customers who have 'expectations'. And, they have the money! So, you don't get a free ride on making up your own expectations (if you ever did)

At the End of the Day
  • I had 800 on my team
  • 400 of them were in overseas locations
  • 400 of them were in multiple US locations
  • I had multiple offices
  • It all worked out: we made money!





Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, August 2, 2024

Do LLMs reason or think?


In a posting on "Eight to Late", the question is posed: Do large language models think, or are they just a communications tool?

The really short answer from Eight to Late is "no, LLMs don't think". No surprise there. I would imagine everyone has that general opinion.

However, if you want more cerebral reasoning, here is the concluding paragraph:
Based, as they are, on a representative corpus of human language, LLMs mimic how humans communicate their thinking, not how humans think. Yes, they can do useful things, even amazing things, but my guess is that these will turn out to have explanations other than intelligence and / or reasoning. For example, in this paper, Ben Prystawksi and his colleagues conclude that “we can expect Chain of Thought reasoning to help when a model is tasked with making inferences that span different topics or concepts that do not co-occur often in its training data, but can be connected through topics or concepts that do.” This is very different from human reasoning which is a) embodied, and thus uses data that is tightly coupled – i.e., relevant to the problem at hand and b) uses the power of abstraction (e.g. theoretical models).



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, July 30, 2024

Data rule #1



The first rule of data:
  • Don't ask for data if you don't know what you are going to do with it
Or, said another way (same rule)
  • Don't ask for data which you cannot use or act upon
 And, your reaction might be: Of course!

But, alas, in the PMO there are too many incidents of reports, data accumulation, measurements, etc., which are PMO doctrine, but in reality there is no plan for what to do with them. Sometimes it's just curiosity; sometimes it's just blind compliance with a data regulation; sometimes it's just to have a justification for an analyst job.

The test:
 If someone says they need data, the first questions are: 
  • What are you going to do with the data?
  • How does the data add value to what is to be done?
  • Is the data quality consistent with the intended use or application (**), and 
  • Is there a plan to effectuate that value-add (in other words, can you put the data into action)?
And how much data?
Does the data inquisitor have a notion of data limits: What is enough, but not too much, to be statistically significant (*), informative for management decision making, and sufficient to establish control limits?
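
One hedged way to put numbers on 'enough but not too much': a back-of-envelope sample size for a chosen margin of error, and 3-sigma control limits from the samples. The sigma, margin, and data below are hypothetical.

```python
import math

# Hypothetical back-of-envelope for 'how much data'.
def sample_size(sigma, margin, z=1.96):
    """Samples needed so the estimated mean is within +/- margin at ~95% confidence."""
    return math.ceil((z * sigma / margin) ** 2)

def control_limits(samples):
    """Mean +/- 3 standard deviations of the sample data."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return mean - 3 * sigma, mean + 3 * sigma

print(sample_size(sigma=4.0, margin=1.0))            # 62 measurements, say weekly cycle times
print(control_limits([12, 14, 11, 13, 15, 12, 13]))  # control limits from an early data set
```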


Like this blog? You'll like my books also! Buy them at any online book retailer!

Saturday, July 27, 2024

Is it alright to guess in statistics?



Is guessing in statistics like crying in baseball? It's something "big people" don't do.
Or is it alright to guess about statistics? 
The Bayesians among us think so; the frequency guys think not. 

Here's a thought experiment: I postulate that there are two probabilities influencing yet a third. To do that, I assumed a probability for "A" and I assumed a probability for "B", both of which jointly influence "C". But, I gave no evidence that either of these assumptions was "calibrated" by prior experience.

I just guessed
What if I just guessed about "A" and "B" without any calibrated evidence to back up my guess? What if my guess was off the mark? What if I was wrong about each of the two probabilities? 
Answer: Being wrong about my guess would throw off all the subsequent analysis for "C".

Guessing is what drives a lot of analysts to apoplexy -- "statisticians don't guess! Statistics are data, not guesses."
Actually, guessing -- wrong or otherwise -- sets up the opportunity to guess again, and be less wrong, or closer to correct.  With the evidence from initial trials that I guessed incorrectly, I can go back and rerun the trials with "A" and "B" using "adjusted" assumptions or better guesses.

Oh, that's Bayes!
Guessing to get started, and then adjusting the "guess" based on evidence so that the analysis or forecast can be run again with better insight is the essence of Bayesian methodology for handling probabilities.
 
And, what should that first guess be?
  • If it's a green field -- no experience, no history -- then guess 50/50, 1 chance in 2, a flip of the coin
  • Else: use your experience and history to guess other than 1 chance in 2
According to conditions
Of course, there's a bit more to Bayes' methodology: the good Dr Bayes -- in the 18th century -- was actually interested in probabilities conditioned on other probable circumstances, context, or events. His insight was: 
  • There is "X" and there is "Y", but "X" in the presence of "Y" may influence outcomes differently. 
  • In order to get started, one has to make an initial guess in the form of a hypothesis about not only the probabilistic performance of "X" and "Y", but also about the influence of "Y" on "X"
  • Then the hypothesis is tested by observing outcomes, all according to the parameters one guessed, and 
  • Finally, follow-up with adjustments until the probabilities better fit the observed variations. 
Always think Bayesian!
  • To get off the dime, make an assumption, and test it against observations
  • Adjust, correct, and move on!
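
A minimal sketch of 'guess, then adjust', using a Beta-Binomial update in which the green-field 50/50 guess is a Beta(1, 1) prior; the trial counts are made up:

```python
# Sketch of 'guess, then adjust': start with a 50/50 guess expressed as a
# Beta(1, 1) prior, then update it as evidence arrives. Numbers are hypothetical.
def update(prior_a, prior_b, successes, failures):
    """Beta-Binomial update: returns the posterior (a, b) and its mean."""
    a = prior_a + successes
    b = prior_b + failures
    return a, b, a / (a + b)

a, b = 1, 1                          # the green-field guess: 1 chance in 2
print(a / (a + b))                   # 0.5

a, b, estimate = update(a, b, successes=2, failures=6)   # first batch of trials
print(estimate)                      # 0.3 -- the first guess was off; adjust and move on

a, b, estimate = update(a, b, successes=3, failures=7)   # more evidence
print(estimate)                      # 0.3 -- settling on the observed rate
```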



Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, July 23, 2024

Enterprise-quality browser for the PMO


The trusty internet browser that has been around since the Netscape days of the 1990's has largely been a lay person's user interface to the internet and to sundry consumer internet apps.

Fair enough
Let's stipulate: In the last 30 years that browser user experience has improved dramatically, to be sure.

Something different
But in recent years, and especially accelerating in 2024, the "enterprise-quality" browser has made inroads in the enterprise business world. New browser companies (*) have formed and are addressing the heightened security needs of the enterprise as well as a myriad of other needs (see below). This opportunity is not lost on the traditional guys from Microsoft, Apple, and Google; they also have their versions of an enterprise browser. (*)

The general requirements set is this:
  • The need for an easier and less complicated way to integrate business apps into the browser. 
  • More of a "windows" (small 'w') look with multiple app windows in a common display, decidedly different from a row (or column) of tabs.
  • Security protections that are more demanding in the enterprise setting.
  • Network, IT, and data protection functions built-in 
PMO effects
So in the PMO you may see new browsers and some of your favorite apps, like Office, database engines, scheduling and costing apps, statistical apps, and others that are somewhat "bolt-on" apps to the consumer browsers (Chrome, Edge, and Safari) become a more integrated app set on the enterprise browser. 

_________
(*) Island and Here, formerly OpenFin, but also "Edge for Business" from Microsoft and "Chrome Enterprise" from Google


Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, July 19, 2024

Clearing the backlog



Yikes! My backlog is blocked! How can this be? We're agile... or maybe we've become de-agiled. Can that happen?

Ah yes, we're agile, but perhaps not everything in the portfolio is agile; indeed, perhaps not everything in the project is agile.

In the event, coupling is the culprit.

Coupling? 
Coupling is system engineering speak for transferring one effect onto another, or causing an effect by some process or outcome elsewhere. The coupling can be loose or tight.
  • Loose coupling: there is some effect transference, but not a lot. Think of double-pane windows decoupling the exterior environment from the interior
  • Tight coupling: there is almost complete transference of one effect onto another. Think of how a cyclist puts (couples) energy into moving the chain; almost no energy is lost flexing the frame.
In the PM domain, it's coupling of dependencies: we tend to think of strong or weak corresponding roughly to tight or loose.
 
Managing coupling is a task in risk management because coupling may introduce unwanted risks in the project or the product.

If coupling is a problem, how to solve it?
If coupling is a benefit, how to obtain it?
First, there are buffers to loosen coupling
The buffer -- if large enough -- absorbs the effect. For an excellent treatment of buffers in the PM domain, see Goldratt's book "Critical Chain" for more about decoupling with buffers

Second, there are coupling objects
  • To avoid unwanted coupling, buffers alone may not do the trick.
  • But to enable coupling, we need some connectivity.
In either case, think of objects, temporary or permanent, that can effect the coupling. A common example is a seam joining one fabric to another. The seam forms a "rip-stop" which prevents a rip from running all the way down the fabric. 
 
One system that uses such a rip-stop is a boat's sails: rip-stops are sewn into the sail fabric to prevent a total failure in the event of damage in one section, thereby decoupling one section from another.
 
Now, move that idea from a sail to a backlog, using object interfaces to isolate one backlog from another (agile-on-agile), or to isolate the agile backlog from structured requirements (agile-on-traditional).

With loose coupling, we get the window pane effect: stuff can go on in "Environment A" without strongly influencing "Environment B". 
Some caution advised: this is sort of an "us vs them" approach; some might call it stovepiping.

The case for tight coupling
Obviously then, there are some risks with loose coupling in the architecture that bear against the opportunity to keep the backlog moving. To wit: we want to maintain pretty tight coupling on communication among project teams, while at the same time we loosen the coupling between their deliverables.

There are two approaches:
  • Invent a temporary object to be a surrogate or stand-in for the partner project/process/object. In other words, we 'stub out' the effect into a temporary effect absorber.
  • Invent a service object (like a window pane) to provide the 'services' to get from one environment to another.
Of course, you might recognize the second approach as a middle layer, or the service layer of a service-oriented architecture (SOA), or just an active interface that does transformation and processing (coupling) from one object/process to another.
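Here's a minimal sketch of both approaches in code terms; the names are illustrative, not a prescribed design. The backlog process talks only to an interface, and behind it sits either a temporary stub (the effect absorber) or a real service object that translates to the structured requirements system:

```python
# Illustrative sketch: decouple an agile backlog from a traditional requirements system
# behind an interface; swap a temporary stub for a real service object later.
from abc import ABC, abstractmethod

class RequirementsPort(ABC):
    """The 'window pane' between environment A (agile) and environment B (traditional)."""
    @abstractmethod
    def publish(self, backlog_item: str) -> str: ...

class StubRequirements(RequirementsPort):
    """Temporary surrogate: absorbs the effect until the partner system is ready."""
    def publish(self, backlog_item: str) -> str:
        return f"parked locally: {backlog_item}"

class RequirementsService(RequirementsPort):
    """Service object: transforms and forwards to the structured requirements tool."""
    def publish(self, backlog_item: str) -> str:
        return f"requirement registered for: {backlog_item}"

def groom(item: str, port: RequirementsPort) -> str:
    # The backlog process is unchanged whichever object sits behind the port
    return port.publish(item)

print(groom("user can export the report", StubRequirements()))
print(groom("user can export the report", RequirementsService()))
```

The design point is that the backlog never couples directly to the other environment; only the object behind the interface changes.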

With all this, you might see the advantages of an architect on the agile team!




Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, July 15, 2024

Government Research you can access



Since at least 2013, there's been a push from the White House Office of Science and Technology Policy (OSTP) to make as much government-sponsored research as possible available to the public for free.

On August 25th, 2022, this policy initiative of access to government research got another push when OSTP published more detailed and aggressive guidance to the executive departments.

What was behind pay walls and other barriers may now be in the free public domain. 
Your project might benefit.
Check it out.



Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, July 12, 2024

Software: Is it ever Done?



The software is never done!

Certainly not news, and certainly not profound, to any engineer, coder, or PM working in the software industry, writing apps, or supporting software-enabled devices.

Users have come to expect routine and regular updates to all things software.

Ooops, not so fast!
What about the automobile industry?
Traditionally: the car is done! 
  • Buy it; keep it; sell it, eventually. Never needs an upgrade!
But that tradition may be short-lived: cars will need upgrades over their life-cycle.

What now?
A few months after I bought a new car I got a recall notice to take it in for an upgrade to the transmission control software. That recall system is probably the way to keep software up to date for many of the in-vehicle software programs. 
  • But, is this only a warranty service? 
  • How long would manufacturers support software upgrades for 2nd or 3rd party apps?
  • The life of a new car is 12+ years. That's 'forever and forever' in the software industry, comparable to still supporting the iPhone 4! 
Big Tech
So-called big tech is taking over much of the user interface and user apps in new vehicles (except Tesla, which does all its own in-house design and does not support Android Auto or Apple CarPlay on its user panels).

Long term support
As a project manager, what can you look forward to as regards long-term supportability of apps?
And, as an app developer, you may be one layer removed from knowing that your app will find its way into the automobile industry. 
And as a business manager or customer support liaison, which customer are you supporting most?
  • The one that pays the bills (automobile manufacturer) or 
  • The customer that buys the car?
In the Agile sense of value-as-defined-by-the-customer, who is most influential?
Long term, these are my wonderments. I wonder how they would be written into project requirements?
  • I wonder if you'll be able to go to your local auto parts retailer and buy an upgrade-on-a-stick for legacy vehicles? 
  • I wonder if there is not a whole industry to be invented supporting older cars with updated interfaces?  
  • I wonder if it's practical to expect the after-market developer to maintain currency for 'a long time'. 
  • I wonder if personal security, data security (nav data, for instance, but also other data about downloads to the car like music stations and podcasts, etc), and all the rest will not spawn a whole set of requirements, support issues, and a supporting industry?

Perhaps in the future, the mantra will have to be something like this:

The software is never done!
And, neither is the car!


Like this blog? You'll like my books also! Buy them at any online book retailer!

Tuesday, July 9, 2024

Is your enterprise Agile?




Applying agile methodology in your software project? Good!

Working for an organization large enough to be called an 'enterprise'? Probably that's good.
Why so?
  • Access to resources is the main reason. You may have heard that agile is all about small self-directing teams -- yes, that's part of the doctrine.
     
  • But how many teams are needed for your project? Dozens? Hundreds even? Where do those people, tools, training, facilities, communications, etc. come from? And who pays for all that?

  • Answer: the Enterprise.
Ah, yes, the enterprise has money!
And where does that money come from? Not you, most likely. Other people! So Other People's Money (OPM) is what is funding you.

Who are these people? If an enterprise, then there are going to be a wide variety: customers (if you're a for-profit business), taxpayers (if you're a government enterprise), donors (if you're a charity, church, etc.), owners (if privately held), or investors (if you're a publicly traded company).

Enterprise imposes expectations
So, here's the thing: even if you're doing Agile methods in an enterprise setting, the enterprise will impose expectations up front .... starting with: expected value return on resources invested.

It's natural that Agile people resist too much "up front" definition; we're about evolution and iteration. But there has to be a compromise.
It's the ageless problem: Other people's money! OPM comes with strings, to wit: a value return is more than expected; it's required.

Enterprise expects estimates (gasp!)
And here's another thing: the enterprise expects that you can estimate/forecast/predict what the resource requirements are that will get to something valuable (to the enterprise). In other words, it's rare indeed that there's a pot of gold you can dip into at your leisure and convenience, unless you're just a researcher working on your own small project.
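One simple way to answer that expectation with a range rather than a single point is a three-point (PERT-style) roll-up; the task figures below are invented for illustration, and the independence and normal-approximation assumptions are mine:

```python
# Illustrative three-point (PERT-style) roll-up: a range for the enterprise, not a point.
import math

tasks = [  # (optimistic, most likely, pessimistic) durations in weeks; invented numbers
    (2, 4, 8),
    (3, 5, 12),
    (1, 2, 4),
]

# PERT expected value per task, summed; variances summed assuming independent tasks
mean = sum((o + 4 * m + p) / 6 for o, m, p in tasks)
std = math.sqrt(sum(((p - o) / 6) ** 2 for o, m, p in tasks))

# Report an approximate 80% range (normal approximation), not a single-point estimate
print(f"expected: {mean:.1f} weeks, likely range: {mean - 1.28*std:.1f} to {mean + 1.28*std:.1f} weeks")
```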

Enterprise brings scale
What makes an enterprise an enterprise is size and scale. 
And what makes a project "enterprise" in its scope is size and scale.
And there is the rub: size and scale are always more than what a handful of people can carry around in their heads. So, many others have to lend a hand and participate in making and supporting both size and scale. 

Scale is not just a lot of one thing, like a large code base. Scale brings breadth, meaning a lot of different things involved and integrated, and scale then brings rules, procedures, accountability, etc. into the frame, because a lot of people have to work somewhat anonymously ... according to rules and procedures ... to bring it together, repeatedly, within predictable limits.

The question: "Can what you're doing be done 'at scale'?" Hand-crafted, job shop, and one-off are not descriptions of scale.  

Enterprise brings rules
There will be a lot of rules. 
There will be a lot of rules because there will be a lot of people involved, many you will never meet, doing jobs for the enterprise you may not even be aware of, but these jobs will nevertheless touch your project or product.

Enterprise requires a narrative
Invariably, one of the rules is going to be that you have to have a viable narrative to get resources committed to your project. You've heard the standard elements of the narrative before:
  1. Vision: What is envisioned as the benefit to the enterprise? Who are envisioned as the beneficiaries?
  2. Scope: What is it you're going to do (and what are the ancillary or consequential impacts elsewhere in the enterprise that you don't consider part of your scope?)
  3. Schedule: when can you likely produce results (no single point estimates, of course. It takes a range!)
  4. Resources: how much, and when (cash flow, and resource allocations)
I can't do this!
Narrative, estimates, rules, value commitments?
You're not enterprise-ready!

I almost forgot:
I wrote the book: "Project Management the Agile Way: Making it work in the Enterprise" 2nd Edition



Like this blog? You'll like my books also! Buy them at any online book retailer!

Friday, July 5, 2024

Activities, Results, Methodology



Back in yesteryear, I recall the first time I had a management job big enough that my team was too large for line-of-sight from my desk and location.

Momentary panic: "What are they doing? How will I know if they are doing anything? What if I get asked what are they doing? How will I answer any of these questions?"

Epiphany: What I thought were important metrics now become less important; outcomes rise to the top
  • Activity becomes less important; where and when people worked could be delegated locally.
  • Methods are still somewhat important, because Quality (in the large sense) is buried in methods. So methods can't be delegated willy-nilly.
  • Outcomes now become the biggie: are we getting results according to expectations?
There's that word: "Expectations"
In any enterprise large enough to not have line-of-sight to everyone, there are going to be lots of 'distant' managers, executives, investors, and customers who have 'expectations'. And, they have the money! So, you don't get a free ride on making up your own expectations (if you ever did)

At the End of the Day
  • I had 800 on my team
  • 400 of them were in overseas locations
  • 400 of them were in multiple US locations
  • I had multiple offices
  • It all worked out: we made money!





Like this blog? You'll like my books also! Buy them at any online book retailer!

Monday, July 1, 2024

"Against the Gods"... a risk perspective



If you are in the project management (read: risk management) business, one of the best books that describes the philosophy and foundation for modern risk management is Peter L. Bernstein's "Against the Gods: The Remarkable Story of Risk".

Against the Gods is historical, somewhat philosophical, and devoid of math!
It's a book for "thinkers"

Between the covers of this "must read" we learn this bit:
The essence of risk management lies in maximizing the areas where we have some control over the outcome while minimizing the areas where we have absolutely no control over the outcome and the linkage between effect and cause is hidden from us.

Peter Bernstein
"Against the Gods: The Remarkable Story of Risk"

Knowledge and control
Dealing with risk necessarily breaks down into that in which more knowledge will help us understand and deal with the risk (climate change), and that in which effects are truly random and no amount of additional knowledge is going to help (rolling dice).

Bernstein goes on to develop one of the key themes of the book which is the idea that probability theory and statistical analysis have revolutionized our ability to understand and manage risk.

Picking apart Bernstein's "essence" separates matters into control and knowledge:
  • We know about it, and can fashion controls for it
  • We know about it, and we can't do much about it, even if we understand cause and effect
  • We know about it, but we don't understand the elements of cause and effect, and so we're pretty much at a loss.
  • We don't know about it, or we don't know enough about it, and more knowledge would help.
Of course, Donald Rumsfeld, in 2002, may have put it more famously:
" ....... because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know."
No luck
So there is an ah-hah moment here: if all things have a cause and effect, even if they are hidden, there is no such thing as luck. (Newtonian physics to the rescue once again)

Thus, as a risk management regimen, we don't have to be concerned with managing luck! That's probably a good thing. (Ooops, as luck would have it, if our project is about the subatomic level, then the randomness of quantum physics is in charge. Thus: luck?)

Indeed, our good friend Laplace, a French mathematician of some renown, said this:
Present events are connected with preceding ones by a tie based upon the evident principle that a thing cannot occur without a cause that produces it. . . .
All events, even those which on account of their insignificance do not seem to follow the great laws of nature, are a result of it just as necessarily as the revolutions of the sun.
Bernstein or Bayes' (with help from ChatGPT)

Following up on the idea of the knowledge-control linkage to risk management, Bayes' Theorem comes to mind. Bayes' is all about forming a hypothesis, testing it with real observations, and using those outcomes to refine the hypothesis, eventually arriving at a probabilistic description of the risk.

Laplace, mentioned above, is one of the architects of the probability theory that underlies Bayes' theorem. Thus, one of the most interesting discussions in the book centers on Bayes' theorem, which Bernstein describes as "one of the most powerful tools of statistical analysis ever invented."

Bayes' theorem is a manner of reasoning about random and unknown effects and a mathematical formula that allows us to update our beliefs about the probability of an event occurring based on new evidence. It is a powerful tool for making predictions and decisions based on incomplete information, and it has applications in fields ranging from medicine to finance to engineering.
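To make that concrete, here is a small worked example with invented numbers: updating the belief that a key supplier will slip its delivery, given that an early warning sign has been observed.

```python
# Illustrative Bayes' theorem update: P(slip | warning), with invented prior and likelihoods.
def bayes_update(prior: float, p_evidence_given_h: float, p_evidence_given_not_h: float) -> float:
    """Posterior probability of the hypothesis given the observed evidence."""
    evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / evidence

prior_slip = 0.20            # prior belief the supplier slips
p_warning_if_slip = 0.80     # the warning sign is common when a slip is coming
p_warning_if_no_slip = 0.10  # but it sometimes appears anyway

posterior = bayes_update(prior_slip, p_warning_if_slip, p_warning_if_no_slip)
print(f"P(slip | warning) = {posterior:.2f}")  # about 0.67: the belief updated by new evidence
```

A 20% hunch becomes roughly a two-in-three concern once the evidence is weighed, which is the kind of counter-intuitive jump Bernstein says our unaided intuition tends to miss.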

Bernstein's discussion of Bayes' theorem in "Against the Gods" is particularly interesting because he highlights the fact that Bayesian reasoning is often at odds with our intuition. Humans have a tendency to overestimate the likelihood of rare events and underestimate the probability of more common events. Bayes' theorem provides a framework for overcoming these biases and making more accurate predictions.

Cognitive Bias in risk management
Bernstein talks a lot about cognitive biases and their impact on decision-making under uncertainty.

According to Bernstein, cognitive biases are mental shortcuts that people use to simplify complex decisions. These shortcuts can lead to errors in judgment and decision-making. Cognitive biases can be influenced by a number of factors, including emotions, personal experience, and cultural values.

Some examples of cognitive biases that Bernstein discusses in the book include the availability bias, which is the tendency to overestimate the likelihood of events that are more easily recalled from memory; and the confirmation bias, which is the tendency to look for information that confirms our existing beliefs and to ignore information that contradicts them.

One key point Bernstein makes is that humans have a natural tendency to be overconfident in their abilities to predict and control events. This is known as the "illusion of control" bias. People often believe they have more control over events than they actually do, leading them to take on more risk than is rational.

Another common cognitive bias is the "confirmation bias," in which people seek out information that confirms their preexisting beliefs, while ignoring or dismissing information that contradicts those beliefs. This can lead to a lack of objectivity in decision-making.

Bernstein also discusses the "hindsight bias," in which people tend to believe that an event was more predictable after it has already occurred. This bias can lead to overconfidence in future predictions, as people may believe that they could have predicted the outcome of an event that has already occurred.

Overall, Bernstein suggests that understanding and being aware of cognitive biases is essential to making better decisions and managing risk effectively. By recognizing these biases, individuals can take steps to mitigate their impact on their decision-making processes.


Like this blog? You'll like my books also! Buy them at any online book retailer!