I find it concerning reading some of the stuff Starmer and his team are saying about how AI can do this, that and the other and how it can help deliver up to £45bn in savings. It *sounds* good but there’s a huge amount to unpack.
For example, I’ve worked as a copywriter for years now, and one leading high street fashion retailer I freelance for is using AI to write large swathes of its online product descriptions. When I work for them, my job is to edit what the AI has produced and get it into shape to be published. The stuff the AI produces needs drastic editing before it can go anywhere near the company website. It’s almost always incorrect, repetitive or simply not relevant to the product I’m working on. It’s my first experience of working with AI, and if it’s like this in other places and industries then good god, we are in trouble.
Talking about running government departments on it and delivering public services with it just seems like absolute fantasy imo. But as I say, it’s the new thing everyone is talking about, so I can see why politicians think it *sounds* good to talk about it too. I just think: who’s controlling it, managing it, editing it, etc.? Who owns this AI, and is it just another load of tech the already wealthy can buy, own, invest in and make money on? At the moment I think it’s something companies and politicians talk about because potential wealthy investors and supporters like the idea of using tech, and of being associated with it and with companies or organisations that are into it.
There’s talk of cutting up to 10,000 civil service jobs. Okay, fine. But are there any workings on what happens to those people next? Any thoughts on what they’ll do, the benefits they’ll be entitled to, or the knock-on effects and costs to health and well-being? It’s all very well saying “we’re going to save x billion by slashing this, that and the other”, but that’s only a tiny fraction of the story.
There’s a bit of a blame culture going on imo. Computers can do this better, slash this, get rid of him, him and her, cut that, save this and that. We don’t need this, that or them. It’s dangerous rhetoric and it doesn’t really feel any different to what’s gone before tbh.
How were they using the AI to do it? Did they have something custom-built, where they hired in an expert and spent a lot of time creating it? Or was it just some random person chucking crap into a free ChatGPT account and then sending you the results?
AI can certainly handle something relatively simple like a product description, no doubt about it, and it can handle things 100x more complicated than that. But with AI the output is extremely reliant on the input: **** in = **** out, etc. There are also basic mistakes like judging it all on an old free version of ChatGPT rather than something like the latest version of Claude (which is apparently much better at writing).
A lot of people have used AI and might get 90% of the result they had previously in 10% of the time. That sounds like a downgrade, but it isn’t, as time is valuable. Obviously with health and things like that, treating someone 90% rather than 100% might not be good, but if you can treat 10 people to 90% in the same amount of time, you cut waiting lists and catch things early, which is likely far better than treating one person 100% after a year’s wait while the other nine wait even longer. Things like image recognition on scans or analysing blood test results are already fairly routine for AI, and on specific tasks it can be faster and at least as accurate as a human.
I've been looking at all of this quite a lot, as I basically want to replace myself with AI and then just monitor it, but it's going to take a lot of time to set up properly, which I haven't had yet. I've created a lot of tools now, which haven't really saved me much time so far, as I don't do a lot of volume, but the quality of the outputs I'm now giving out is 5x better. It's not that I couldn't have made them 5x better myself, but the time it would take to repeat that every time would have been ludicrous.
A big thing with AI now is prompt engineering. Actually writing the prompt for the AI is a skill in itself, and the vast majority of people are lazy with it (myself included). Most people start off with a one-liner prompt, get a half-assed response and then fight it for an hour to get what they want; it's not the way to do it. There are even AI programs specifically designed to write you the perfect prompt.
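To make that concrete, here's a rough sketch of the difference in Python using the OpenAI SDK. The model name, the product data and the house-style example are all placeholders I've made up for illustration, not anything from a real retailer:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The lazy one-liner most people start with:
lazy_prompt = "Write a product description for a women's denim jacket."

# A structured prompt: role, audience, constraints, format and a style example.
structured_prompt = """You are a copywriter for a UK high street fashion retailer.

Task: write a product description for the item below.
Audience: women aged 25-45 shopping online.
Tone: warm, confident, British English, no exclamation marks.
Length: 60-80 words, then 3 short bullet points on fit and fabric.
Do not invent details that aren't in the product data.

Product data (placeholder for the example):
- Name: Oversized denim jacket
- Fabric: 100% organic cotton denim
- Details: dropped shoulders, two chest pockets, mid-wash blue

Example of the house style:
"Cut for an easy, throw-on fit, this jacket keeps its lines clean..."
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you actually have access to
    messages=[{"role": "user", "content": structured_prompt}],
)
print(response.choices[0].message.content)
```

The point isn't the exact wording, it's that the constraints and the product data go in up front instead of being dragged out of the model over an hour of back and forth.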
Then you can go one stage further and use custom GPTs, giving the AI your own instructions and reference material on exactly what you want until it basically becomes an expert in that area.
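Under the hood that's roughly a reusable set of instructions plus your own reference material sent along with every request. A minimal sketch of the same idea done in code, assuming a guidelines file and a placeholder model name that are both made up for the example:

```python
from openai import OpenAI

client = OpenAI()

# Load the brand guidelines once; this plays the role of the reference
# material you'd upload to a custom GPT (the file name is hypothetical).
with open("brand_guidelines.txt") as f:
    guidelines = f.read()

SYSTEM_PROMPT = (
    "You are the in-house copywriter for a UK fashion retailer. "
    "Follow these brand guidelines exactly:\n\n" + guidelines
)

def describe(product_data: str) -> str:
    """Reuse the same 'expert' setup for every product."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Write the description for:\n{product_data}"},
        ],
    )
    return response.choices[0].message.content

print(describe("Ribbed roll-neck jumper, merino wool, cropped fit, cream"))
```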
Then you can go one stage further than that with AI agents, which basically act like helpers: trawling sites, digging out info on competitors and competitor products, and so on.
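Strictly as an illustration (the tool, the site and the model name are all made up, and real agent frameworks do a lot more), the basic loop looks something like this: the model asks for a tool, your code runs it, and the result goes back in until it has enough to answer:

```python
import json
from openai import OpenAI

client = OpenAI()

def fetch_competitor_page(url: str) -> str:
    """Hypothetical helper: a real agent would fetch and clean the actual page."""
    return f"(pretend contents of {url}: price £49, 100% cotton, 4.2-star reviews)"

tools = [{
    "type": "function",
    "function": {
        "name": "fetch_competitor_page",
        "description": "Fetch the text of a competitor's product page.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

messages = [{"role": "user",
             "content": "Compare our £45 denim jacket against https://example.com/jacket."}]

# Simple agent loop: keep going until the model stops asking for tools.
while True:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
        tools=tools,
    )
    msg = response.choices[0].message
    if not msg.tool_calls:
        print(msg.content)
        break
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = fetch_competitor_page(**args)
        messages.append({"role": "tool",
                         "tool_call_id": call.id,
                         "content": result})
```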
Someone clued up with AI, given a week or a month, could replace 20 or 2,000 people and deliver a better result, but it does need time to set up, and the right person doing it. The problem is there aren't many people with those skills yet, as it hasn't been around very long, and a lot of the tools are still in their infancy and fairly complicated to use, but this is changing, very quickly.
What we need to be doing now is actually training people on AI: how to use it, how to build things with it, and how to monitor the outputs, so we get the most out of it. If Labour are looking at AI, they're looking at the present and looking forward, which is a good thing, as it can't be ignored.