Or, sometimes, you order the factory to _reduce_ output to 50% of what it can do for the last week of Q1 so you don't have excess unsold inventory on the books.
Then in Q2, you panic because you don't have enough inventory, so you order the factory to produce at 150% to catch up. Both 50% and 150% are inefficient factory states; if you weren't thinking about snapshot reporting, you'd have just let it run at 100%, and your combined Q1+Q2 results would be better.
I have personally seen this happen at a household-name Fortune 50 company. It's insane and causes real damage to the business in many ways.
Yes, but focusing on it being the highest it possibly can be _tomorrow_ versus the highest it can be in ten years is a huge difference. Only some executives can take actions based on a long view without being replaced by the board. Usually founders and near-founders.
Right, so if you are already hyperfocused on tomorrow, then focusing on the end of the quarter is pretty much a wash in terms of short- versus long-term decisionmaking.
Starlink very likely leans toward “many cheaper satellites that may fail” instead of “fewer expensive satellites that are less likely to fail.”
Their advantage in the satellite-internet industry is that they can launch stuff fast and cheap; very likely this drives different tradeoff decisions than the regime this article talks about.
Having thousands of satellites also lets you find more software bugs, so in reality they can be more reliable than NASA-style probes (where each one runs its own unique software).
The Starlink tangent misses something important about why software reliability in satellite systems is categorically different from hardware reliability.
We need to come up with a catchy buzzword salad to market to executives. Something like "increased communication efficiency between workers by direct brain-email-brain interface"