Will A.I. Become the New McKinsey? Yes. And it will begin when McKinsey and the other big consultants start to apply A.I. routinely. If firms like McKinsey are “capital’s willing executioners”, A.I. will at first merely sharpen their axes.
There’s a whole moral dimension to this question – using consulting companies to do corporate dirty work while the owners duck responsibility – which I’ll skip over. I want to look at the concrete changes these consultants will soon make to existing business processes with A.I., and how those changes build on what they already do.
Before speculating about the logical end state for A.I. and every possible downside, it’s helpful to look at where we are today. Consulting companies were already replacing office workers before the latest A.I. boom. The simplest step one could imagine is that they improve on existing offerings. There’s A.I. hype – and they’ll use plenty of it in their sales pitches – but there are real functional enhancements A.I. will bring almost immediately. Maybe it won’t be earth-shattering for a few years.
A business may bring in consultants to put a stamp of approval on a large layoff the client already wants, and they could do the same with A.I. (“the algorithm indicated this is the best path forward…”). It’s not clear to me that A.I. would make this behavior much more common. Maybe. However, even after ruthlessly shrinking their workforces, companies have real tasks to accomplish. Cutting labor out of the actual remaining work is where A.I. will have a measurable negative effect on labor (positive for the bottom line, though) very soon.
Some savings can’t be had without A.I. (assuming it works as advertised). In the longer term it will be used for all sorts of things consultants do now to improve efficiency and boost sales, extending into decision-making, but not at first.
Where It Begins
Businesses already replace office jobs with computers using RPA (Robotic Process Automation). Traditional RPA captures and reproduces repetitive clerical tasks (using office tools and enterprise software) with “software robotics”. In some cases, these “bots” are just scripts that move the mouse around and push buttons through UI automation, just as a user would. They can read text from one application, apply some simple rules, and take actions in other applications. This is about as brittle as it sounds, but it does work.
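To make the brittleness concrete, here’s a minimal sketch of what a traditional RPA bot amounts to, using Python’s pyautogui to drive the UI. The screen coordinates, field names, and the business rule are all made up for illustration; a real bot would be recorded against one specific desktop layout, which is exactly why it breaks when a window moves.

```python
# A minimal sketch of a traditional, brittle RPA "bot" driving the UI.
# All coordinates, field names, and the rule below are hypothetical.
import pyautogui
import pyperclip

# Copy a value out of the source application, exactly as a user would.
pyautogui.click(420, 310)              # click the (assumed) "Total" field
pyautogui.hotkey("ctrl", "a")          # select its contents
pyautogui.hotkey("ctrl", "c")          # copy to the clipboard
total = float(pyperclip.paste().strip().lstrip("$"))

# Apply a simple rule of the kind RPA scripts encode.
status = "FLAG_FOR_REVIEW" if total > 10_000 else "AUTO_APPROVE"

# Type the result into the destination application.
pyautogui.click(900, 540)              # focus the (assumed) "Status" field
pyautogui.typewrite(status, interval=0.05)
pyautogui.press("enter")
```

Shift the window layout by a few pixels and the whole thing falls over – that’s the brittleness in question.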
Imagine a supercharged version of RPA. Now consultants can replace workers more efficiently, and the bots will be slightly better than current RPA scripts. Crucially, the bots may become substantially more robust. Oh, hang on, you don’t have to imagine it: IBM is calling this “Intelligent Process Automation.”
Enhancing RPA with large-language-model-driven A.I. – building on the simpler machine learning and scripts that power RPA today – could add whole new capabilities, like genuinely good report writing and rudimentary decision-making. We’re reaching the point where that is technically possible and likely affordable. It’s possible to hook the output of an LLM up to an execution engine and feed the results back in as a prompt, keeping the work in a linear session and giving the A.I. a rudimentary short-term “memory.”
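Here’s a minimal sketch of that feedback loop. Everything in it is an assumption for illustration: complete() stands in for whatever completion API you use (local or hosted), and the “execution engine” is a toy command dispatcher. The growing prompt is the session’s short-term memory.

```python
# A sketch of an LLM-plus-execution-engine loop. complete() is a
# hypothetical stand-in for any completion API; execute() is a toy engine.

def complete(prompt: str) -> str:
    """Stand-in for a real LLM call (llama.cpp bindings, a hosted API, etc.)."""
    return "LOOKUP customer=ACME"       # canned output so the sketch runs

def execute(action: str) -> str:
    """Toy execution engine: parse the model's proposed action and run it."""
    if action.startswith("LOOKUP"):
        return "ACME has 3 open invoices"
    return "error: unknown action"

prompt = "You are an office bot. Emit one ACTION per turn.\n"
for _ in range(3):                      # bounded; a real agent needs stop conditions
    action = complete(prompt).strip()
    result = execute(action)
    # Appending both halves of the exchange is the "memory": the next
    # completion sees everything that happened earlier in the session.
    prompt += f"ACTION: {action}\nRESULT: {result}\n"
print(prompt)
```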
How Imminent Is All This, Really?
A GPT-4 level model isn’t necessary to get real benefits from an LLM. A small model running on a desktop – easy today on a four-core office PC with 16GB of RAM – can fill out simple reports and forms and generate step-by-step task lists faster than a human can, and more predictably too. The first iterations of this kind of software will be clunky scripting-and-LLM integrations where scripts extract the output and provide new inputs. But that’s just right now.
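As a sketch of what that clunky scripting-and-LLM integration might look like: the llama-cpp-python bindings can run a quantized model on exactly that kind of office PC. The model file and the report template below are assumptions for illustration.

```python
# A minimal sketch: a wrapper script asks a small local model to fill out
# a routine report, then hands the text to the next automation step.
# The model path and the template are hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="office-model-7b-q4.gguf", n_ctx=2048)

template = (
    "Fill in the weekly status report.\n"
    "Completed tasks: migrated invoices, closed 12 tickets.\n"
    "Report:\n"
)
out = llm(template, max_tokens=256, temperature=0.2, stop=["\n\n"])
report = out["choices"][0]["text"].strip()

print(report)  # the RPA layer would paste this into the reporting tool
```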
There’s rapid progress in self-hosted small models runnable on average-to-high-end desktop systems. I can already think of several ways to set something up at home if I wanted. See here for specific directions.
The important point is that the cost is relatively low, and no single entity will control the use and spread of these A.I.-powered tools. More than that, regulation will be a practical impossibility. See “llama.cpp”, “alpaca.cpp”, and “Vicuna”, which emerged this year. Here are some projects to watch: Oobabooga, and KoboldAI (a system focused on writing and storytelling). And there are so many more. The TextSynth product focuses on providing a common API to many A.I. models, making them easier to place into a larger automation framework. It hosts text-completion, translation, and text-to-image types of models.
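For example, here’s roughly what calling a model through TextSynth’s HTTP completion endpoint looks like. The engine name and the exact response shape are assumptions here – check the current TextSynth documentation before building on them.

```python
# A sketch of using a common hosted-model API (TextSynth) from inside an
# automation framework. Engine name and response fields are assumptions.
import requests

API_KEY = "..."                      # your TextSynth API key
ENGINE = "gptj_6B"                   # assumed engine identifier

resp = requests.post(
    f"https://api.textsynth.com/v1/engines/{ENGINE}/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "Summarize this support ticket: ...", "max_tokens": 128},
)
resp.raise_for_status()
print(resp.json()["text"])           # completion text for the next pipeline step
```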
The choke point today is training new models, which takes far more computing resources than running an existing one. We’re talking on the order of two to twenty-five million dollars. The training cost will keep dropping. Also, once a few generic “office work” models get leaked into the wild, all bets are off.
Image recognition and generation, as well as good language translation models, have rapidly improved recently. I don’t quite know how these fit into turbo-charged RPA, but I’m sure they will find a role. My main point is that recent advances can fit neatly into existing automation structures, and we don’t need to wait for a perfect sentient A.I. before seeing big effects.
If This Goes On
A.I. represents the latest step in the direction of automating all labor. The microcomputer (I include office and home PCs and smartphones in this category) eliminated whole types of jobs and businesses. No more One Hour Photo places, to name but one. That trend is set to accelerate now. Another thing the computer did was give bureaucracies enough free time to double and triple the “paperwork”, now that it’s not on actual paper.
It’s hard to see how to prevent automation from creating more inequality. Perhaps we can use it to multiply the busywork even more? Somehow I don’t think this will happen once people are cut out of the loop. If human labor becomes less necessary to operate a company, the owners will by definition keep a larger share of the profits. Companies that don’t embrace automation will fail to compete.
The only way out I can see requires radical change to employment regulations and taxation. Right now, in the U.S.A., employing a worker means income tax and Social Security taxes get paid: sure, the workers think they’re paying, but look at it another way. For every worker, a company has to spend the pre-tax salary, while the worker only takes home the post-tax amount. A software bot or hardware robot, on the other hand, pays no income tax; it’s a business expense which can be deducted from the company’s taxes. So, if a worker and a bot are equally efficient, the worker is at a big disadvantage because the bot costs the company around forty percent less. At the very least the bots should pay taxes too. Somehow the playing field should be leveled. Eventually the bots will still outperform the humans, though.
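To make the arithmetic concrete, here’s a back-of-the-envelope version using assumed round numbers – a $100,000 salary and a 40% combined income and payroll tax wedge. The real rates vary; the wedge is the point.

```python
# Back-of-the-envelope worker-vs-bot cost comparison. The salary and the
# 40% combined tax wedge are assumptions chosen for round numbers.
salary = 100_000                       # what the company spends on the worker
tax_wedge = 0.40                       # assumed combined income + payroll taxes
take_home = salary * (1 - tax_wedge)   # $60,000 actually reaches the worker

# A bot doing the same job has no tax wedge; if it can be run for roughly
# the take-home figure, the company pockets the difference (and the bot's
# cost is a deductible business expense on top of that).
bot_cost = take_home
savings = (salary - bot_cost) / salary
print(f"bot is {savings:.0%} cheaper") # -> bot is 40% cheaper
```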
UBI (Universal Basic Income) isn’t a good answer. It provides a floor but does little about massive inequality, the very existence of which corrodes society. Also, while we haven’t really run this experiment, it seems like basic income without any responsibilities attached would just feel bad. People need something to do and a reason to do it.
Reforms that could actually reduce A.I.-boosted inequality are genuinely radical. Somehow, everyone ought to benefit from the increases in productivity automation provides. How even you think that distribution ought to be is an ideological question.
The old idea that workers are rewarded according to their level of effort doesn’t hold much water once you introduce rapidly improving A.I. into the picture. We’ll probably keep hearing it, though. (Owners and top leadership need to explain themselves, after all.) Usually, this notion is offered up alongside the contradictory – and true – excuse that pay matches the market, not the value the worker provides to the firm. Companies only pay as much as it takes to get workers to keep showing up for their jobs. Once A.I. puts large categories of people out of work at once – people who the day before were getting paid “fairly” for their skills – this fact will be much more obvious to everyone.
In our current capitalist context, sharing in A.I. gains means everyone needs to own shares in businesses using A.I. How do we do that in the U.S.? Nationalize companies? Force companies to trade their shares to the government for tax credits? Does the government redistribute these shares like a social security benefit? An even more radical idea would be to introduce a heavy wealth tax which would be simpler to administer – but harder to enforce – than the share distribution scheme. None of these will happen. Well, maybe the tax credits idea could get somewhere.