A recent paper suggested that people like me are among those most at risk of being made superfluous by Artificial Intelligence (AI). I work on health and development issues for a communications agency and do a lot of writing. Some of it is press releases and briefings for journalists. I am, though, delusional enough to think that my job is safer than that of many doctors.
Muckrack, a press release distribution company, says it can write materials based on just a little information about a company and its product. I told it to write a press release with only the instruction to introduce “a new vegan ghee product targeted to health-conscious but traditional Indian women”.
Here’s Muckrack’s effort, generated for free in under a minute. “Ghee-Free, a leading vegan food company, has launched a new vegan ghee product aimed at health-conscious but traditional Indian women. The product is made from plant-based ingredients and is free from animal fats, making it an ideal choice for vegans and those who are lactose intolerant … traditional ghee is high in saturated fats and cholesterol, which can be harmful to health if consumed in excess. Ghee-Free Vegan Ghee is a healthier alternative that provides the same rich flavour and texture as traditional ghee.”
In truth, this AI would replace most of the PR writers I’ve met. That’s because they’re not very good and Muckrack is just aping their bad writing. We have a client — I’ve changed the name because they pay me — which insists on beginning every release “Dullco today announced….” Obviously, it’s not news, as I keep telling them to their intense irritation. “The Emperor Ashoka today announced” would be news, as the dead are not in the habit of announcing things. “The Prime Minister today announced” would be news had he been silent for a month, otherwise I think editors would tell the reporter to lead on what the PM had announced.
My vegan ghee release should have started with a heart attack victim’s lifetime of ghee consumption; with the heart-rending story of a woman unable to eat with her husband because of her lactose intolerance; or maybe a sentence or two on animal welfare and the demand for milk. Something, in other words, that will make the reader care and read on. A person has to go and find that unfortunate housewife or heart attack patient (AI is very prone to “hallucinations”, where it makes up stories and data, but many media outlets take a dim view of this kind of thing, and the accompanying photo would be problematic). Only a person has enough empathy for cows to think of the starving calf angle (AI doesn’t do empathy, although it can learn to mimic it, and its current lack of empathy says something about the human condition).
Cameras in every phone and AI-driven editing software have not finished off photographers, although they did put the corner photo studio, with its washed-out backdrops, out of business. One of my colleagues, Ivan Ruiz, told me, “photo editors had always been limited to the analogue and very manual processes of dark-room editing, layering, and developing. While imaging was revolutionised and changed forever when Photoshop came out in the late 80s, it and today’s sophisticated tools haven’t replaced the eye of the photographer… Someone needs to have a creative vision for what the final product will ultimately be.” AI, even generative AI, doesn’t have vision.
Maybe my policy analysis work is more at risk. But if, for example, a pharmaceutical company asks me about future market opportunities for advanced cholesterol medicines, it is assuming I understand why it wants to know. A machine will do better than any of us at reporting epidemiology, trends, treatment capacity and the resulting market size. Pharmaceutical companies, though, want to know whether anyone will actually pay for their products, and that is much more about reading the tea leaves.
A Russian colleague told me she was interviewing a senior government official about the massive problem of HIV in the country’s prisons. What he actually said could have come from the manifesto of a well-meaning NGO, but as he said it, he smiled in the wrong places and made hand gestures that did not suggest he was part of humanity’s empathetic vanguard. In the end, my colleague prodded him a bit: “Some people say that these prisoners will be dead by the time they’re 35 and we shouldn’t worry too much about what they die of.” The official nodded enthusiastically before she could ask what he thought of the statement. It was clear that no Russian money would be going on more tolerable HIV medicines — clear to her, but probably not to any AI analytic programme.
We work in a murky world where we need to stir emotions and read subtexts. So, for example, do doctors and nurses in humble primary care: Does the patient really want to know his prognosis? Was there something about the way the woman complaining of a sore throat paused before she said she didn’t have a problem swallowing? Expensive medical specialists, though, work in a much simpler world: Their job is to figure out what’s wrong using diagnostics and then to do their best, based on the facts of the case and the evidence on treatments, to gain affordable quality-adjusted life years for the patient. That’s the kind of work AI is very good at.
All of us will need to shift from being inefficient data processors to having the vision to tell the machines what might be and what must not be. We will also need to evolve the ability to make ourselves intelligible to Artificial Intelligence and to make AI genuinely useful to human aspirations and needs.
This article is authored by Mark Chataway, managing partner, public health, Europe, Middle East and Africa at FINN Partners.