How concerned are you about AI?

LaPennaBianca

Well-known member
In the same vein as the climate change thread - how do we all feel about Artificial Intelligence?

I can't say I'm not worried about the pace of change. Even if the worst case scenarios outlined in the link don't happen, I think the world is going to be a very different place - a lot sooner than we think - because of it.

 
I have already given my opinion on this topic. What seems to be worrying people is the advance of generative AI. It seems a lot more impressive than it is.

AI has been around for decades and has been incredibly successful, outstripping humans in specialised fields. Contract reading in law would be a great example. Finding cancer markers in blood would be another.

Generative AI seems impressive, and intelligent, because it mimics what humans can do: draw a picture, write a poem or a piece of music, or write code.

The thing is, these things look impressive, but AI is still rubbish at them compared to a human expert. Amazon's CodeWhisperer is about the state of the art for code writing. It's still quite poor, and despite being labelled as generative, it's not very imaginative at all.

Nobody was worried when AI could read a contract and apply the law to it in a fraction of the time it takes a lawyer to do the same job. Nobody complained that AI was a danger then, simply because to a layman it doesn't look impressive. ChatGPT writing a song for Issiah Jones seems much more impressive. It really isn't.
 
That's the crux of the matter, it isn't really "intelligent" in the way we are. Well, not yet and probably not for a while.
We are a long way from Terminator territory.
 
I think to get to Terminator apocalyptic scenarios, AI has to become conscious. We don't even know how that works. One theory is that as soon as a system becomes suitably complex, consciousness emerges. Lots of biologists disagree with this theory.

I have no idea and I am not a biologist.
 
Not at all, I'm a senior consultant and have dabbled with GPT to produce code etc...

It's just not very good and is getting worse with every iteration.

We're in an AI stock bubble where everyone and their dog says AI 30,000 times in their financial presentation to pump their share price.

Like my old friend @Laughing alluded to it's currently hype over substance.

This could change in the next few years. A close friend of mine who's in deep with Microsoft says they've got some mind-blowing stuff coming in the next 6-12 months, but it's not an AI that is learning by itself exponentially.
 
As an academic, we've seen a wave of AI generated assessed work in the last year.

We have software that detects this, but that leaves me slightly uncomfortable. I don't actually understand how it detects AI content. Also, I only know about the AI it has spotted: did other AI-generated work make it through undetected?

I've got to say that the material containing AI content has been pretty shoddy so far, though. It wouldn't pass even if it were the student's own work.
 
Interesting stuff above.

The hype has been happening since at least the late 1970s. I attended lectures on AI and "machine learning" in 1977, when I had just started at university.

These days I am not involved in the IT industry any more, so I cannot really comment, apart from saying that worries about AI have been around now for pushing 50 years.
 
I remember when the next big thing was the bucket brigade algorithm and evolutionary programs written in Lisp. These would generate new code based on a bunch of programs all written to solve the one problem. The most successful programs would split and recombine with other successful code to create new code. Random changes were introduced into the offspring too.

It was exciting, and worked for simple problems, search algorithms, for example.
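The split-recombine-mutate loop described above can be sketched in a few lines of modern Python. This is only a toy illustration of the general idea, not the Lisp systems from back then: the problem (maximise the number of 1-bits in a bit string), and all the names and parameters, are invented for the example.

```python
import random

random.seed(0)

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy objective: count the 1-bits (the classic "OneMax" problem).
    return sum(genome)

def crossover(a, b):
    # Split two successful genomes at a random point and recombine them.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome):
    # Introduce random changes into the offspring.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the fitter half unchanged, then refill the population
        # by recombining and mutating random pairs of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # close to GENOME_LEN after a few dozen generations
```

As noted above, this works nicely for simple problems like this one, but the fitness function is doing all the steering; for anything open-ended, defining "successful" is the hard part.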

Deep learning is also a misnomer. The "deep" part in the name refers to the number of hidden layers in a neural network. It doesn't mean that the learning is deep or meaningful. It's a bit of AI spin.
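To make the point concrete, here is a minimal sketch, in plain Python with made-up weights, of what "deep" actually counts: a network is just a list of layers applied in sequence, and "deep" only means that list is long. Nothing about the processing in each layer gets any more meaningful as you add layers.

```python
def relu(xs):
    # Standard rectifier activation: clamp negatives to zero.
    return [max(0.0, x) for x in xs]

def dense(xs, weights, biases):
    # One fully connected layer: relu(W @ x + b).
    return relu([sum(w * x for w, x in zip(row, xs)) + b
                 for row, b in zip(weights, biases)])

def forward(xs, layers):
    # "Depth" is simply len(layers): each hidden layer feeds the next.
    for weights, biases in layers:
        xs = dense(xs, weights, biases)
    return xs

# A made-up network with 3 hidden layers of width 2 - "deep" here
# means nothing more than this list having 3 entries.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]),
    ([[0.2, 0.8], [-0.3, 0.7]], [0.0, 0.0]),
    ([[1.0, 1.0], [0.0, 1.0]], [0.0, 0.0]),
]
print(forward([1.0, 2.0], layers))  # approximately [2.4, 1.12]
```

Real frameworks add training (backpropagation) and far wider layers, but the "deep" in the name still refers only to this stacking.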
 
Anyone who is interested in AI, and machine learning in particular: Amazon run an online university. It's free, and a bit grandly named. Nevertheless it exists and is a solid starting point for all things machine learning. There are also deep learning models that you can use for free, albeit to a limited extent. If you are serious, you can pay for access to the models. They are generative, so a little limited: things like natural language processing or image generation.

It can be found here : https://aws.amazon.com/machine-learning/mlu/
 
I think it will take over some people's jobs faster than people think, but I don't think this would have to be a bad thing, if handled correctly. The problem is it probably wouldn't be handled correctly and would be profiteered the **** out of.

Ultimately, loads of people with low-skill admin/office/warehouse/driving jobs might have to compete with robots or computers, which can learn faster, work faster, work longer, and improve faster. Hard to compete with that. And it's only going to get better, not worse; there's too much money and opportunity in it for companies not to develop it extremely quickly, which is why all the big players are doing it. They're not blowing all this money on something they don't believe in, and they know more than anyone.

People often think it needs to be better than experts in the field to actually "work", but I don't think it does. It only has to do better than the lower 10%-50% or whatever, and it will start with the lower end first and work its way up.

The thing is AI will help people learn faster, so theoretically people should become more skilled, and those who put more effort in to learn will gain the most, and those who don't will lose out. AI could sort of put itself out of a job, in some instances.

I've used it to learn a few new things, and it's a really rapid way to learn, or at least to learn enough for what you want to do. It's also good for a lot of automation/manual stuff, and I wish I had more time to work on this. Quite ironic that I don't have the time to do something which would save me a lot of time.

Could end up with a scenario like this:

Effectively instead of having 5 experts and 5 basic staff it could end up with 2 experts and 2 basic staff babysitting AI doing the rest. That's 6 people out of work, or doing different work.

The experts who retain jobs would probably end up being paid more; the basic staff wouldn't, as there would be more people competing for the roles, with the lower end being paid less or nothing.

Then, because there will be tons of people out of work, there would be some kind of revolt/collapse, or the tax on those who do work would be absolutely massive. It could end up bringing back some sort of balance, but I doubt it.
 
When people say they are worried about AI, I presume they aren't talking about a Terminator/Skynet issue but a "my job might no longer be required" issue. That means they are worried about automation rather than AI. Some automation is much more complex to achieve than others, but if it's possible to assess a situation via a certain number of variables and then do a thing, it's only a matter of time before it is automated.

A checkout worker took a product, pointed its barcode at a scanner and then took payment from the customer. That has been replaced by self-checkout, and even that is being replaced by those shops that scan everything and see what you put in and out of your basket and automatically take your payment. That in turn will be replaced by not needing to go to shops at all. That's a lot of jobs if it can be done, but the same was said when things like the PC became mainstream. It saves time and allows other things to be done more effectively.

I'm an accountant, and in the old days things were manually calculated and written down on paper ledgers. Accountants are still a thing, but the really basic process of writing and calculating has been replaced, and accountants now use the outputs to make decisions etc. We're always after more automation, because there are always things that take time and don't really add anything, and the more of that we do, the more free time we have to do useful stuff. It is noticeable, though, that the number of junior staff required is much lower than it used to be, because those roles are the ones that have been automated.

There are obviously some things that will always need human intervention. An AI doctor might be able to diagnose based on the variables you provide, but would it have the empathy and ability to explain what that means to patients of varying intelligence/comprehension etc? Other roles that require emotional connections, like nurses or therapists, will still be needed, and anything manual like construction will be done by people for a long time. Any task that requires a lot of dexterity but isn't worth the financial outlay of creating a robot/AI to do it won't be replaced. A surgeon could be replaced, because surgeons are expensive and the risks of getting surgery wrong are catastrophic, but mechanics or machine operators won't be, because they aren't worth the outlay. Education will still be done by people, because teachers need to be able to understand how, and whether, their students understand. An AI could produce a lesson, but making sure all the children understand it might not be possible.

It's a fascinating subject and it is really interesting to see where we can go. I'm sure there are things that will really help people like me with chores like displaying data and making presentations, which will free up my time, but I think it will be a long time before it will be replacing my job.
 
In 1987, when I started university, I was writing genetic algorithms, which was machine learning but needed a human to verify the results. It was slow and prone to error, like its creator.
 
It's not low skilled workers that are in danger. More the white collar middle income earners.

The low skilled workers have been the target of the industrial revolution and automated machinery. It's our turn now.
 
It will take time but 5-10 years is more realistic.

There will be leakage before then obviously.

Just more of us living in poverty, what could go wrong.
I don't know, Alves. All the same fears existed around the industrial revolution. People just retrained into other jobs.

Eventually most people will not work. Will we be free to pursue art, hobbies, metal fabrication or whatever floats your boat? Or will we live in squalor and poverty?

I don't know.
 