How concerned are you about AI?

When people say they are worried about AI, I presume they aren't talking about a Terminator/Skynet issue but a "my job might no longer be required" issue. That means they are worried about automation rather than AI. Some automation is much more complex to achieve than others, but if it's possible to assess a situation via a certain number of variables and then do a thing, it's only a matter of time before it is automated. A checkout worker took a product, pointed its barcode at a scanner and then took payment from the customer. That has been replaced by self-checkout, and even that is being replaced by those shops that scan everything you put in or take out of your basket and take your payment automatically, and that in turn will be replaced by not needing to go to shops at all.

That's a lot of jobs if it can be done, but the same was said when things like the PC became mainstream. It saves time and allows other things to be done more effectively. I'm an accountant, and in the old days things were manually calculated and written down in paper ledgers. Accountants are still a thing, but the really basic process of writing and calculating has been replaced, and accountants now use the outputs to make decisions etc. We're always after more automation because there are always things that take time and don't really add anything, and the more of that we automate the more free time we have to do useful stuff. It is noticeable, though, that the number of junior staff required is much lower than it used to be, because those roles are the ones that have been automated.

There are obviously some things that will always need human intervention. An AI doctor might be able to diagnose based on the variables you provide, but would it have the empathy and the ability to explain what that means to patients of varying intelligence and comprehension? Other roles that require emotional connections, like nurses or therapists, will still be needed, and anything manual like construction will be done by people for a long time. Any task that requires a lot of dexterity but isn't worth the financial outlay of creating a robot/AI to do it won't be replaced. A surgeon could be replaced, because surgeons are expensive and the risks of getting surgery wrong are catastrophic, but mechanics or machine operators won't be. Education will still be done by people, because teachers need to be able to understand how and whether their students understand. An AI could produce a lesson, but making sure all the children understand it might not be possible.

It's a fascinating subject and it is really interesting to see where we can go. I'm sure there are things that will really help people like me, such as displaying data and making presentations, which are currently a bit of a chore and which will free up my time, but I think it will be a long time before it replaces my job.
Yeah, I think when a lot of people say AI they mean automation, but I assume AI getting better would lead to more readily available automation; it seems that way from what I've used ChatGPT or whatever for, anyway.

AI might do doctor-style diagnosis at a high level, for low-risk aspects, but it is obviously nowhere near handling the low-level/high-risk aspects. I suppose over time it could increase in level and in risk though, the same way it has got better at fault finding, building cars, operating military systems, weather forecasting/modelling etc. Obviously, as AI/automation improves, so does computing power, and if things go greener then actual power should get cheaper too, which may make things more and more viable.

I can see teachers being more like AI Babysitters, use the computers to do the donkey work and have the teacher assisting where AI goes wrong or the kids go wrong etc. A bit like better quality distance learning where you don't have to wait for an e-mail reply when you get stuck. I bet it's already exceptional at looking at a complex equation and telling you where you went wrong, rather than trying to figure it out yourself or find a teacher to ask or go on a message board etc.

I need to set some time aside to get into it more, as I've got a ton of ideas which could speed up some time-consuming tasks.
 
What scares me about AI is its use to propagate deliberate misinformation. It is a horrid tool to use during elections, referenda, propaganda for war etc. This isn't a Skynet issue but imagine if social media hacking and data mining is used to flood the world with lies and erroneous 'justifications' for war or even character assassination. We could get to the stage where we just do not know what is true and what isn't.
 
In the same vein as the climate change thread - how do we all feel about Artificial Intelligence?

I can't say I'm not worried about the pace of change. Even if the worst case scenarios outlined in the link don't happen, I think the world is going to be a very different place - a lot sooner than we think - because of it.

Of the ways that AI might threaten us, it's number 2 on the above list that concerns me. I'm more and more aware of how online links and threads are catching my eye and my attention and getting me to engage with certain subjects, and of what arguments and evidence are swaying me towards a point of view or a particular belief. Companies are bit by bit becoming more and more powerful than the governments they help get elected. The dystopian future I fear is the ability of big companies to totally understand you as an individual and tailor every response individually to your orientations.
 
As a digital designer I’ve been thinking about this recently, and what I’ll actually be doing for a career in 20 maybe 30 years time.
 
I'd probably be worried, job-wise, if I was 20-30 years younger, but I don't have too long to wait for retirement so it won't affect me.

The deep fake videos are a bit of a worry, but in reality probably no worse than having the media "engineer" what we are told via news channels.
 
Not in my lifetime, but the opportunity will be there for a geek to take over the world through AI.
Astonishing how so much that would have been considered pure sci-fi has come to be accepted as normal.
 
I was on the phone to National Savings the other day. It was like a scene from '2001: A Space Odyssey'.

'Transfer me to a human, Hal. Now!'

Businesses ask you not to be abusive to their staff, so I'm not, but if it's a robot, does that mean you can let rip?
 
Some people are highly concerned about AI and its potential impact on society. They worry about issues like job displacement, ethical implications, privacy, and the potential for AI to surpass human capabilities. These concerns often arise from a fear of unknown or unpredictable consequences.

On the other hand, there are those who see AI as a positive force that can greatly benefit society. They believe that AI has the potential to automate mundane tasks, improve efficiency, and solve complex problems, leading to advancements in various fields.

In general, it is important to approach AI development with caution and consider the ethical implications. Collaborative efforts from researchers, policymakers, and the general public are crucial to ensure that AI is used responsibly and in the best interest of humanity.
 
I don't know, Alves. All the same fears existed around the industrial revolution. People just retrained into other jobs.

Eventually most people will not work. Will we be free to pursue art, hobbies, metal fabrication or whatever floats your boat? Or will we live in squalor and poverty?

I don't know.

You never said you missed me by the way, git.
 
Twitter is dying anyway, so it might only be a short term problem, but I'm getting sick of the d***heads tagging AI chatbots in every bit of news.

"@RoastHimJim" etc.

I've muted so many automated accounts in the last month.
 
There will probably be a point when the robots decide they don't need humans, as they can operate each other without needing humans.

They may then keep humans as a sort of pet.

A robot recently learned chess, and within 2 hours it had beaten the human world grandmaster at chess.

For certain tasks, AI and robots are in a totally different class to humans.
 
AI is great for efficiency and speeding things up, but there does seem to be room for error, and you can't really negotiate or talk to AI like a human. Ultimately AI will take the jobs of thousands if not millions of workers, and I'm not sure how an economy keeps up when we depend on a human workforce, or how everyone retrains in an AI-friendly world; most people older than 35 are pretty ****ed if they don't retrain.
 
I didn't really have an initial view, so to help me I asked ChatGPT: "How concerned are you about AI?"
Answer : Nothing to see here. Please don't worry, please resume your manual menial tasks of today and refrain from tasking me this question. Everything will work out fine. 🤷‍♂️
I guess I'm Ok with that.
 
I got quite agitated about AI after listening to a podcast by some gadge who advises the White House and the EU.

Had a discussion on here about it and Laughing brought a sense of balance which, for me, was very helpful.

I’m now in the place where I accept there has to be a risk of it being used for harm, or indeed taking the lead on harm, but I’m unsure how big that risk is (I usually live in a world of possibilities rather than certainties), so I’m OK.

For society at large I think it is hugely problematic much like a lot of innovation.

Humans are very good at using it to their own advantage, usually with the wonderful consequence of benefiting the wealthy in their society.

Maybe advanced generative AI could sort this for us ….
 
I remember watching a video on YouTube of experts talking about existential threats to the human species and was gobsmacked that these experts had calculated the odds of humans dying out in the next 100 years to be 1 in 3. I thought it would be 1 in many hundreds, as 100 years really isn't that long in these terms. There were all sorts of different threats being discussed: plagues, asteroids etc., but the biggest single threat to human existence was AI.

I also wondered, if they are placing such short odds on humans dying out in the next 100 years, then what about the next 200 or 300 years? Still a relatively short period of time. If these experts are calculating those odds for the next 100 years, then they must be almost certain that humanity will become extinct in the next 1000 years. 🤷
 
Pull the plug on it now because it’s going to replace all of us if we don’t - and that leaves 8 billion unemployed people with nothing to do. I don’t think the powers that be will be willing to fund that somehow
 
The powers that be will be AI
 