
ChatGPT and the (near) future of AI

Capitalism as we know it is destined to fail, because there will simply not be a need for everyone to work to produce goods, yet everyone will still need to work in order to buy them.

The question from there becomes: what do you replace it with? How do you allow people to have ambitions, to compete, and to define themselves without it?
You don't! If there were ever an argument for population control or depopulation conspiracies, this is it, ladies and gentlemen...
 
It's the end times, the end times have come!!!


Probably not, but this will ruin a lot of mundane entry- and mid-tier programming jobs. I don't believe it will get to the point where senior software engineers are made redundant, as those people will be pushing the software further. I'm personally more fearful of automated robots that can interact and look as humans do. The Tesla Bot, for example, is a prime example of something out of I, Robot.

 

XXL

Member
It's ironic that everyone was so convinced that factory workers and laborers were going to be the first set of jobs to be entirely replaced... and oh how wrong they were.

In reality... artists, writers, musicians and other creative positions are 100% going to be the first set of jobs that get replaced entirely by AI.

No one would have predicted that.

The next set will most likely be programmers, analysts, designers and other jobs of that nature. The laborers will probably be the last to go.

It will be interesting to watch how the dynamics and priorities shift at companies as they start to realize this.
 
Last edited:
As a psychologist I do think that some fears about AI are overblown; I don't see it happening that AI will somehow manifest consciousness and start setting its own goals. Human motivation is strongly linked to the fact that we are embodied beings, and our bodies have needs which determine what psychological drives we have. For example, humans get curious, which prompts us to explore our environment, and that in turn is advantageous to our survival. An AI can't be curious. It can be told to gather information, but there's no way that I can see that an AI will start gathering information simply because it wants to know or is curious. So a Terminator-like Skynet that decides of its own accord that humans are to be exterminated is pretty much out of the question. However, an AI can be pre-programmed with specific goals that could set it up (even inadvertently) to reach a similar conclusion. If you program a goal like self-preservation into an AI, then depending on how you operationalize that, it could reach a point where it determines that humans are a threat to its continued existence. This is the big danger: not so much where AI will 'evolve' to, but rather what problems it will be directed at and how those problems are framed.

For my own job I'm not super fearful of being replaced, as I work with psychiatric patients and I don't see an effective FreudGPT appearing just yet.
I don't think there is much difference between embodiment in virtual reality and embodiment in the real world. A brain in a vat would be subjectively as embodied as one with an actual physical body. As for curiosity, we have to remember that so far it seems physics is computable, and as a physical system the brain too is computable. All information points to the brain being a type of information processor or computer, so its functions should be replicable in artificial machines.
Mechanisms like UBI or rations would work in that case because some corporation or industrial concern will want to make a profit supplying necessities. Inequality will probably increase but does it really matter what the rich are doing if we get to a point where everybody can live a long and healthy life without working?
The big problem is that with enough technology and power you can rival or surpass the government itself. The basis of the legal system, that which upholds the law, is the monopoly on the greatest power; if a wealthy individual surpasses the military strength of the government, they essentially gain control of the law, and basically they can be lawless if they desire.
Because that is something that I sincerely doubt; you need a body to feel pain.
Even direct brain stimulation can cause pain. The signals that arrive from the senses are digital, all-or-none action potentials, not too different from what you'd get with a virtual body if properly connected.
Already chat bots are replacing call centre workers. I see this where I work.
And when the chatbot fails and you call for human help, they transfer you to third-world workers who barely understand English and are paid pennies on the dollar.
Even if you can upload whatever into your brain, you still need people to do the job.
Someone has to do the job, but as AI advances, the rate of progress does too. That includes progress on robotics, and once robotics gets advanced enough together with AI, it's bye-bye for trade workers.
My main concern is the CEOs who can remain at the top while everyone else has to accept whatever income they can get - but history has shown that if our "overlords" are not taking good care of us, their heads end up on a stick.
The problem is you can automate private security and even create automated militaries. Also, it may be the owner class that gives the commands, as even CEO positions may be automated eventually.
Still, investors have confidence: Uber is allowed to exist while losing close to 30 billion dollars so far, because the day self-driving hits the streets Uber will go meteoric.
The problem for Uber is that Tesla has far, far more data and is likely vastly ahead in self-driving research. I think part of the problem is the amount of memory in current in-car systems; they can likely only house very small neural networks. Once they get more compute and memory they will be able to do far, far better.
That's basically how it works today: it looks at other code humans already wrote and puts it together to answer your question.
Yes it does have problems with entirely novel challenges. But it can extrapolate to an extent, and it is believed future versions will be able to extrapolate further. We can already see in the image generation models how they're able to achieve novel output combinations in interesting ways.
But I do wanna point back a bit to the core problem here, and that's consciousness. Literally everything we perceive and experience happens in the realm of consciousness, which is an immaterial realm - you can't grab or exchange a piece of your consciousness directly. How and when exactly physical processes give rise to this immaterial phenomenon of consciousness is not fully understood; this is the so-called "hard problem" of consciousness.
I think the nature of consciousness likely has to do with the nature of information itself and with symbolic representation or encoding. The brain takes sensory input and encodes it in the sparse activity of neural networks; these represent or symbolize the external world. Despite the activity of neurons having neither sound nor color, somehow merely representing or symbolizing what's outside results in conscious sensation.
I graduated in economics, have a postgrad and 10 years of work experience, and I'm dreading this shit. 'Cause I know I'll have to keep spending my savings on more and more education to keep myself afloat.
I doubt staying educated will allow you to keep up. I think a FIRE (financial independence, retire early) strategy may be more sensible, given how fast we are going.
No AI is going to be able to repair people's HVACs or replace their flooring or build their house for the foreseeable future.
Depends on what you mean by foreseeable future. If you mean years, sure, unlikely; but in a few decades at most those jobs are likely gone.
 
Last edited:

QSD

Member
I don't think there is much difference between embodiment in virtual reality and embodiment in the real world. A brain in a vat would be subjectively as embodied as one with an actual physical body. As for curiosity, we have to remember that so far it seems physics is computable, and as a physical system the brain too is computable. All information points to the brain being a type of information processor or computer, so its functions should be replicable in artificial machines.
Even direct brain stimulation can cause pain. The signals that arrive from the senses are digital, all-or-none action potentials, not too different from what you'd get with a virtual body if properly connected.
It isn't quite as easy as 'human experience is computable'. It might be, but as yet we are not sure. You have to remember that the information in your brain is not just zeros and ones; it is electrical impulses dancing on a tapestry of qualitatively different neurons. Some elicit pain, for example. Pain is an irreducible experience. You wouldn't be able to explain it in any other terms to a person or robot who has never felt it. You need pain receptors and congruent neurons in the brain to feel pain; not just any transistor will do. So far we don't know how to build digital equivalents of pain receptors, and it may take a long time before we do. The same thing goes for all our other sensations/qualia. They are qualitative phenomena, irreducible to mere ones and zeros, because we don't fully understand the hardware they are firing into. So a 'brain in a vat' may work (debatable to what extent), but you can't just replace that brain with a CPU, as we don't yet have the technology and are not very close to getting it AFAIK.

I think the nature of consciousness likely has to do with the nature of information itself and with symbolic representation or encoding. The brain takes sensory input and encodes it in the sparse activity of neural networks; these represent or symbolize the external world. Despite the activity of neurons having neither sound nor color, somehow merely representing or symbolizing what's outside results in conscious sensation.

So it's 3:00 am and this is way above my paygrade so I googled it and this 2018 article in Nature came up:
What Is Consciousness
I think it bears out a bunch of what I said, maybe not everything, so make of it what you will.
at least one of the working theories predicts that
IIT also predicts that a sophisticated simulation of a human brain running on a digital computer cannot be conscious—even if it can speak in a manner indistinguishable from a human being. Just as simulating the massive gravitational attraction of a black hole does not actually deform spacetime around the computer implementing the astrophysical code, programming for consciousness will never create a conscious computer. Consciousness cannot be computed: it must be built into the structure of the system.
Two challenges lie ahead. One is to use the increasingly refined tools at our disposal to observe and probe the vast coalitions of highly heterogeneous neurons making up the brain to further delineate the neuronal footprints of consciousness. This effort will take decades, given the byzantine complexity of the central nervous system. The other is to verify or falsify the two, currently dominant, theories. Or, perhaps, to construct a better theory out of fragments of these two that will satisfactorily explain the central puzzle of our existence: how a three-pound organ with the consistency of tofu exudes the feeling of life.
 

Hugare

Member
Just so you guys know: I've been working with ChatGPT for the past 2-3h on a Python project to extract all pages from a .pdf file and rename each one of them using information from those files

This is a task that a tech intern in my company has been working on for months now, without success. So I said, "fuck it, why not give it to GPT?"

And it's done. It's 100% working, better than I hoped for.

And I know shit about programming. Had to Google a bit, downloaded Python and some libs, ran IDLE a few times to find bugs, but that was it.

This thing will make life so hard for Jr programmers ...
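For anyone curious, a script like the one described above could look roughly like the sketch below. This is my own minimal reconstruction, not the actual code ChatGPT produced: it assumes the pypdf library, an input file called input.pdf, and that each page's new filename comes from the first line of text on that page, all of which are guesses.

```python
# Minimal sketch of the kind of script described above. Assumptions:
# the pypdf library (pip install pypdf), an input file "input.pdf",
# and that each page's new name comes from its first line of text.
import re
from pathlib import Path

from pypdf import PdfReader, PdfWriter

def split_and_rename(source_pdf: str, out_dir: str = "pages") -> None:
    Path(out_dir).mkdir(exist_ok=True)
    reader = PdfReader(source_pdf)
    seen: dict[str, int] = {}

    for page in reader.pages:
        text = page.extract_text() or ""
        # Use the first non-empty line as the base name, stripped of
        # characters that aren't filename-safe.
        first_line = next((ln for ln in text.splitlines() if ln.strip()), "page")
        base = re.sub(r"[^\w\- ]", "", first_line).strip() or "page"

        # Handle identical names by appending ".1", ".2", ... -- the same
        # "add a '.' when renaming identical files" fix mentioned later in
        # the thread.
        count = seen.get(base, 0)
        seen[base] = count + 1
        name = base if count == 0 else f"{base}.{count}"

        writer = PdfWriter()
        writer.add_page(page)
        with open(Path(out_dir) / f"{name}.pdf", "wb") as out_file:
            writer.write(out_file)

split_and_rename("input.pdf")
```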
 
Last edited:

Tams

Gold Member
Just so you guys know: I've been working with ChatGPT for the past 2-3h on a Python project to extract all pages from a .pdf file and rename each one of them using information from those files

This is a task that a tech intern in my company has been working on for months now, without success. So I said, "fuck it, why not give it to GPT?"

And it's done. It's 100% working, better than I hoped for.

And I know shit about programming. Had to Google a bit, downloaded Python and some libs, ran IDLE a few times to find bugs, but that was it.

This thing will make life so hard for Jr programmers ...

And not have to go on to Stack Overflow, where it's often either someone telling you to learn to code and only giving you hints, or some smartarse helping but using some obtuse methods in order to show off.

Unsurprisingly, they're having a bit of a breakdown/hate wankfest over ChatGPT over there. It's been temporarily 'banned' there.

On one hand, it does mean junior programmers won't have to wade as much through shite like that, but I doubt that'll be much consolation when* it makes most of them obsolete.

*Which it pretty much already has; it's just not been long enough for the consequences to fully show.

Great if you're a hobbyist who just benefits from being able to get code written for free for your projects.
 
Last edited:

Hugare

Member
And not have to go on to Stack Overflow, where it's often either someone telling you to learn to code and only giving you hints, or some smartarse helping but using some obtuse methods in order to show off.

Unsurprisingly, they're having a bit of a breakdown/hate wankfest over ChatGPT over there. It's been temporarily 'banned' there.

On one hand, it does mean junior programmers won't have to wade as much through shite like that, but I doubt that'll be much consolation when* it makes most of them obsolete.

*Which it pretty much already has; it's just not been long enough for the consequences to fully show.

Great if you're a hobbyist who just benefits from being able to get code written for free for your projects.
It was pretty insane that I was able to do it without any previous knowledge of programming.

Most of my time was spent asking ChatGPT to add something or subtract something. "Ok, now add to the code that I don't want numbers in the filenames". Done.

And when something went wrong, all I had to do was use IDLE and copy/paste the result to ChatGPT. He brought me the solution instantly. "Ok, so add a '.' when renaming identical files". Fixed.

I have to say, it was really fun. When everything finally worked perfectly, the feeling was great.

It's stupid to say it considering the situation, but I feel more motivated than ever to learn how to code. Tomorrow I'll be using The Odin Project to start learning Python.

Don't have the intention to become a programmer, but as a hobby it seems fun.
 

Tams

Gold Member
It was pretty insane that I was able to do it without any previous knowledge of programming.

Most of my time was spent asking ChatGPT to add something or subtract something. "Ok, now add to the code that I don't want numbers in the filenames". Done.

And when something went wrong, all I had to do was use IDLE and copy/paste the result to ChatGPT. He brought me the solution instantly. "Ok, so add a '.' when renaming identical files". Fixed.

I have to say, it was really fun. When everything finally worked perfectly, the feeling was great.

It's stupid to say it considering the situation, but I feel more motivated than ever to learn how to code. Tomorrow I'll be using The Odin Project to start learning Python.

Don't have the intention to become a programmer, but as a hobby it seems fun.

Yeah, I started and 'finished' (still more I want to do) a classroom tool I'd been pondering for a few years now. I couldn't get myself through the coding courses though; they were all full of doing stuff that's important to learn but on projects I didn't care about.

Along came ChatGPT, and I had something workable in a weekend. And it was fun. I even got some features in that I'd abandoned hope for as they are a bit niche and obtuse.
 
Last edited:
It isn't quite as easy as 'human experience is computable'. It might be, but as yet we are not sure. You have to remember that the information in your brain is not just zeros and ones; it is electrical impulses dancing on a tapestry of qualitatively different neurons. Some elicit pain, for example. Pain is an irreducible experience. You wouldn't be able to explain it in any other terms to a person or robot who has never felt it. You need pain receptors and congruent neurons in the brain to feel pain; not just any transistor will do. So far we don't know how to build digital equivalents of pain receptors, and it may take a long time before we do. The same thing goes for all our other sensations/qualia. They are qualitative phenomena, irreducible to mere ones and zeros, because we don't fully understand the hardware they are firing into. So a 'brain in a vat' may work (debatable to what extent), but you can't just replace that brain with a CPU, as we don't yet have the technology and are not very close to getting it AFAIK.

Physically the signal or action potential transmitted from a pain receptor is no different from that transmitted from a taste receptor or a photoreceptor cell in vision. It has the same ions, it has the same nature, it is identical. What differentiates pain from say vision is where the wiring goes and how the signal is handled or represented by the brain.

Again it is the symbolic encoding, or representational encoding that somehow corresponds to qualia. It is likely consciousness is an intrinsic property of certain states of information or representation.

So it's 3:00 am and this is way above my paygrade so I googled it and this 2018 article in Nature came up:
What Is Consciousness
I think it bears out a bunch of what I said, maybe not everything, so make of it what you will.
at least one of the working theories predicts that

I disagree strongly with IIT. Whilst its idea is sound, the dismissal of digital devices is a nonstarter. It is based on the notion that the brain has physical wires connecting the neurons, whilst within a computer a NN's wiring is virtual. But it ignores the fact that that virtual connectivity is manifested physically and causally in the computer and is of the same nature as the physical wires in a brain.

Comparing it to a black hole's gravity is ridiculous; the brain produces neither sound nor light nor smell, only electrical signals, similar to a computer. And all things point to its nature being information processing. Now, unlike some people who say consciousness lies in the processing, I believe that consciousness resides not in the processing but in the information or representational states. Get the right states and you have consciousness.
 
Last edited:
For regular people, I cannot overstate what a tool this is. I teach part-time English. Now, instead of googling the shit out of the internet to try and find more fitting exercises for students... all I do is use ChatGPT: "make a 25-question past perfect exercise for 8th graders, make the topic global warming." 15 seconds later, DONE. Or, if I have no inspiration for a lesson plan and other people's are behind a paywall, no problem. I type "make a lesson plan for a 2-hour class about segregation in the early 50s"... done XD It's amazing. I'm sure, just as with Midjourney, this will go behind a paywall, because the results it makes are insane.

Another example: I'm currently watching a Korean show called Signal, so I asked it to write a plot for the show as if it were made for a US audience and which actors it would use. To a T, every single detail, from physical appearance to emotional expression, to what differences the US show would have compared to the Korean one, etc. The AI made a script.
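For what it's worth, the same kind of prompt can also be scripted through the official openai Python package instead of the web UI. A rough sketch, assuming you have API access, an OPENAI_API_KEY environment variable set, and some model available (the model name below is just a placeholder):

```python
# Sketch of scripting the exercise-generation prompt with the official
# openai package. Assumes an API key in OPENAI_API_KEY; the model name
# is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Make a 25-question past perfect exercise for 8th graders. "
    "Make the topic global warming. Include an answer key."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```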
 
Last edited:

01011001

Banned
I tested whether the bot understands the humor and punchlines of jokes.

why is the following joke funny? My wife told me she'll break my keyboard if I don't get off the computer. I'm not too worried, I think she's jokinlkjhfakljn m,.nbziyoao78yv87dfaoyuofaytdf

The joke is supposed to be funny because it plays on the idea that the speaker's wife is frustrated with him spending too much time on the computer and is making a playful threat to damage it as a way to get him to stop. The sudden shift in the end to nonsensical letters and numbers adds an element of absurdity and surprise.

______________________________________________

why is this joke funny?

A man walks into a magic forest and tries to cut down a talking tree. "You can't cut me down," the tree complains. "I'm a talking tree!" The man responds, "You may be a talking tree, but you will dialogue."

The joke is funny because it uses word play to create a pun. The tree says "I'm a talking tree!" and the man responds "You may be a talking tree, but you will dialogue." This plays on the multiple meanings of the word "dialogue," which can mean a conversation or discussion, as well as a verb meaning "to cut down." The punchline is unexpected and creates a humorous contrast between the tree's ability to talk and the man's intention to cut it down.
 
Last edited:

PSYGN

Member
I tested whether the bot understands the humor and punchlines of jokes.

why is the following joke funny? My wife told me she'll break my keyboard if I don't get off the computer. I'm not too worried, I think she's jokinlkjhfakljn m,.nbziyoao78yv87dfaoyuofaytdf

The joke is supposed to be funny because it plays on the idea that the speaker's wife is frustrated with him spending too much time on the computer and is making a playful threat to damage it as a way to get him to stop. The sudden shift in the end to nonsensical letters and numbers adds an element of absurdity and surprise.

______________________________________________

why is this joke funny?

A man walks into a magic forest and tries to cut down a talking tree. "You can't cut me down," the tree complains. "I'm a talking tree!" The man responds, "You may be a talking tree, but you will dialogue."


The joke is funny because it uses word play to create a pun. The tree says "I'm a talking tree!" and the man responds "You may be a talking tree, but you will dialogue." This plays on the multiple meanings of the word "dialogue," which can mean a conversation or discussion, as well as a verb meaning "to cut down." The punchline is unexpected and creates a humorous contrast between the tree's ability to talk and the man's intention to cut it down.

The sudden shift in the end to nonsensical letters and numbers adds an element of absurdity and surprise.

I mean, I guess you're not technically wrong... but you're not right.

The second one is impressive, but I wonder if it figured that out by synthesizing all the data online (e.g. comments from foreigners who might not get it, asking for an explanation).

--

Why did I read the AI's answers in Data's voice?

 
Last edited:

Hugare

Member
Made another script yesterday that looks for all 175 .pdf files inside one folder and extracts 47 digits from each one into an Excel file. In literally minutes, without any knowledge of programming.

My boss was so impressed that he wrote a message to the whole team saying how awesome that was and how our work would be improved by it.

Sorry programmers, but this thing is amazing.
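A rough guess at what that second script might look like, assuming pypdf for the text extraction, openpyxl for the spreadsheet, and that the "47 digits" are one contiguous run of digits somewhere in each file's text. The post doesn't say how the real script finds them, and the folder name here is made up.

```python
# Sketch of the "pull 47 digits from every PDF in a folder into Excel"
# script. Assumptions: pypdf + openpyxl, a folder named "pdfs", and the
# digits appearing as one 47-character run in each file's text.
import re
from pathlib import Path

from openpyxl import Workbook
from pypdf import PdfReader

folder = Path("pdfs")              # hypothetical folder name
pattern = re.compile(r"\d{47}")    # one 47-digit run, as described

wb = Workbook()
ws = wb.active
ws.append(["file", "digits"])

for pdf_path in sorted(folder.glob("*.pdf")):
    text = "".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    match = pattern.search(text)
    ws.append([pdf_path.name, match.group() if match else "NOT FOUND"])

wb.save("extracted_digits.xlsx")
```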
 

QSD

Member
Physically the signal or action potential transmitted from a pain receptor is no different from that transmitted from a taste receptor or a photoreceptor cell in vision. It has the same ions, it has the same nature, it is identical. What differentiates pain from say vision is where the wiring goes and how the signal is handled or represented by the brain.
Yes, that's kind of what I was saying... the electrical impulses that make up the signal are the same, but the hardware they are firing into is not well understood yet. The Nature piece I quoted mentions it might take decades of research before we are there.
Again it is the symbolic encoding, or representational encoding that somehow corresponds to qualia. It is likely consciousness is an intrinsic property of certain states of information or representation.
hmmm, doesn't a working definition of consciousness as we know it also require some form of moment-to-moment awareness? If the consciousness were in the information/representation only, someone who is in deep sleep or under anesthesia would also be considered conscious, because the information they would normally have access to is still represented in the brain, ready to be accessed again upon waking. But perhaps I misunderstand you?
I disagree strongly with IIT. Whilst its idea is sound, the dismissal of digital devices is a nonstarter. It is based on the notion that the brain has physical wires connecting the neurons, whilst within a computer a NN's wiring is virtual. But it ignores the fact that that virtual connectivity is manifested physically and causally in the computer and is of the same nature as the physical wires in a brain.

Comparing it to a black hole's gravity is ridiculous; the brain produces neither sound nor light nor smell, only electrical signals, similar to a computer. And all things point to its nature being information processing. Now, unlike some people who say consciousness lies in the processing, I believe that consciousness resides not in the processing but in the information or representational states. Get the right states and you have consciousness.
It's sort of strange to claim the brain doesn't produce sound/smell/light etc. - what else is there that would produce these sensations in your consciousness but the brain? The point about the black hole is valid to my intuition: even though you are right that the virtual wiring still has physical analogues, that doesn't mean the computer model actually manifests the physical phenomenon it's simulating. I guess we'll agree to disagree. I do get the impression that the idea of virtual consciousness is one that is very appealing to you, hence your enthusiasm for defending it.
 

sinnergy

Member
I let it write a blog page in HTML with correct use of section, article, and figure tags 🤣 Amazing, it was correct. Can it code interactive maps? Menus? Next test.
 
Last edited:

Sonik

Member
This feels like the early days of the internet all over again. Where people are excited and know it's going to be big, but most don't realize how much it's going to change the world yet.

I can guess. Are you aware of the Dead Internet Theory? That's what ChatGPT is going to bring. In just a few years it'll be astroturfing bots, propagandist bots and control-the-narrative bots talking to each other while being supported by an army of voting bots. All organic discussion is going to be limited to a few small communities, while social media will be fully automated propaganda machines. Reddit and Twitter are already full of astroturfers and bots; ChatGPT is just going to make them worse on a whole new level.
 
Last edited:
Yes, that's kind of what I was saying... the electrical impulses that make up the signal are the same, but the hardware they are firing into is not well understood yet. The Nature piece I quoted mentions it might take decades of research before we are there.

hmmm, doesn't a working definition of consciousness as we know it also require some form of moment-to-moment awareness? If the consciousness were in the information/representation only, someone who is in deep sleep or under anesthesia would also be considered conscious, because the information they would normally have access to is still represented in the brain, ready to be accessed again upon waking. But perhaps I misunderstand you?

There is a difference between the information represented by ongoing brain activity and the information stored in synapses; under deep sleep or anesthesia the latter remains, but it is not being actively represented.

It's sort of strange to claim the brain doesn't produce sound/smell/light etc. - what else is there that would produce these sensations in your consciousness but the brain? The point about the black hole is valid to my intuition: even though you are right that the virtual wiring still has physical analogues, that doesn't mean the computer model actually manifests the physical phenomenon it's simulating. I guess we'll agree to disagree. I do get the impression that the idea of virtual consciousness is one that is very appealing to you, hence your enthusiasm for defending it.
Within the brain, the sound, smell, and vision are not physically there. All that exists is the electrical activity of neurons, and this codes for information. My claim is that this information is what contains the sound, smell, vision, etc.
 

Aesius

Member
I can guess. Are you aware of the Dead Internet Theory? That's what ChatGPT is going to bring. In just a few years it'll be astroturfing bots, propagandist bots and control-the-narrative bots talking to each other while being supported by an army of voting bots. All organic discussion is going to be limited to a few small communities, while social media will be fully automated propaganda machines. Reddit and Twitter are already full of astroturfers and bots; ChatGPT is just going to make them worse on a whole new level.
Honestly...I think this may be for the best for humanity. The internet isn't going anywhere, but it would be amazing if we started using it more the way we did in the 1990s. For specific purposes and at specific times, instead of just endlessly scrolling social media (which, as you said, is heavily manipulated via bots).

But who knows. There are already so many bots out there and people don't seem to care. Maybe we will get to a point where you can simulate being a popular influencer by having ChatGPT create legions of followers to comment on your posts and YouTube videos.
 

GladiusFrog

Member
I've been losing sleep over this. I can't believe how far machine learning has come.

Not too long from now we will be able to create content on demand, entire films and probably even games... it will be as simple as asking the AI to make Skyrim the movie starring whatever actors you like and in the style of any director you want.

Looking into the future, it's exciting and also pretty damn terrifying to me.

Imagine countries using machine learning to make strategic decisions. This whole thing is coming at humanity so fast I'm not sure we can keep up with the implications.

The Genie is out and running wild.

 
Last edited:

Sonik

Member
Honestly...I think this may be for the best for humanity. The internet isn't going anywhere, but it would be amazing if we started using it more the way we did in the 1990s. For specific purposes and at specific times, instead of just endlessly scrolling social media (which, as you said, is heavily manipulated via bots).

But who knows. There are already so many bots out there and people don't seem to care. Maybe we will get to a point where you can simulate being a popular influencer by having ChatGPT create legions of followers to comment on your posts and YouTube videos.

You're optimistic. Personally, I think humanity is so dumb that it will continue to consume social media even after it has become fully automated and under the complete control of corporations and the government. After all, at least half of the people think and behave like NPCs already and are often indistinguishable from astroturfers, since they parrot whatever they are told. So, to summarize, we're fucked.
 

01011001

Banned
how about 10h of computer generated prog/djent?



This isn't even using an AI really; it's just a clever procedural generation algorithm.
Imagine what an advanced AI could do lol

"The Hidden Disturbance" (the one I timestamped) is fire
 
Last edited:

Yoda

Member
Some of you are really over-reacting. Big tech cos have similar models (that work nearly as well) which simply aren't released. One reason (of many) is that they'll confidently spout bullshit if you press them, see: https://twitter.com/DavidDeutschOxf (you'll find OpenAI's CEO responding to a few of his threads). This tech is certainly promising, but it mass replacing entire sectors anytime soon isn't realistic at all.

A good analogy is self-driving cars. There are instances where you can get into a Waymo car or even a Tesla with FSD and everything goes perfectly, but there's STILL extremely hard to crack edge-cases despite the endless budgets (Waymo, Google's self-driving unit, is > a decade old if you count the time it was part of Google X). You'll note no one is seriously trying to piece-meal self-driving tech (at scale). For example, having the truck drive itself on the highway -> have a human finish the last mile. Why? Permanently screwing the truck driver labor supply can't (easily) be undone. A truck with millions of dollars of goods which is late will screw up inventory tracking. Buying new equipment to make the truck drive on its own is expensive. The companies which offer the FSD tech will price it aggressively given how much money was sunk into it to date -> thus eliminating the savings from firing the truck drivers. Those are just a few; I'm sure the bean counters have a bunch of other reasons holding them back as well.

So what would be an obvious use case? Probably Google Search; I'd wager Google has the exact same capabilities internally, but given a ChatGPT query costs multiple orders of magnitude more than a regular search query, there's no reason to upend their main profit center, esp. if it'll strain the biz relationships they have with AdSense/SEO should they make major changes to search. Perhaps M$ adding it to Bing will force their hand, but I'd never bet money on M$ out-competing Google on a pure tech play.

If this replaces anything significantly enough to change the labor pools for certain sectors, it'd most likely be:
* customer service (this is already heavily automated by bots; you need to convince execs the bot/GPT can make decisions which result in monetary transactions).
* copywriters. It'd mainly be a tool to assist agencies which help out companies that don't have giant budgets (think startups).

TBH that's it. I see lots of people saying junior devs, but I don't think so in a significant sense. As I've worked in this industry for a while, I have a few opinions:
* For BigTech devs, I think it'd mainly be a productivity tool for some boilerplate-y style tasks, but managing complex existing systems? Not a chance.
* I think tasks like "generate unit tests against all public methods in X source file" would be a great fit, or instrumenting an existing stack with the required boilerplate for a latency profiler (see the sketch after this list). But most even trivial tasks at <insert BigTech co> require some form of communication with other teams, understanding their needs/requirements, etc... and ChatGPT (in its current form) will never replace that.
* Startups only recently started hiring junior devs (unless you had multiple internships and/or came from a stellar CS program). Given the economy, I suspect they'll go back to senior devs only. This'll make said seniors more productive at bootstrapping new systems, but it can't replace them. So perhaps there'll be an X increase in senior dev productivity which will stop them from spawning work that'd normally be taken up by someone junior. But I think it's a much smaller impact than a lot of the sensational headlines are claiming.
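To make the unit-test bullet above concrete, here's one way (my own sketch, not something described in the post) that kind of request could be assembled into a prompt: gather a module's public callables and its source, then hand the result to the model however you like. The stdlib json module is just an example target.

```python
# Sketch of the "generate unit tests for all public methods in X source
# file" task: build a prompt from a module's public callables and source.
# The 'json' module is only an example target.
import importlib
import inspect

def build_unit_test_prompt(module_name: str) -> str:
    module = importlib.import_module(module_name)
    public = [
        name for name, obj in inspect.getmembers(module)
        if not name.startswith("_") and callable(obj)
    ]
    source = inspect.getsource(module)
    return (
        "Write pytest unit tests covering the public callables "
        f"{', '.join(public)} from the module below.\n\n{source}"
    )

print(build_unit_test_prompt("json")[:500])  # preview the assembled prompt
```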
 

Chaplain

Member
This feels like the early days of the internet all over again. Where people are excited and know it's going to be big, but most don't realize how much it's going to change the world yet.

True. However, there are some who already see where this ultimately goes:

"The new religion of artificial intelligence is called Way of the Future. It represents an unlikely next act for the Silicon Valley robotics wunderkind at the center of a high-stakes legal battle between Uber and Waymo, Alphabet’s autonomous-vehicle company. Papers filed with the Internal Revenue Service in May name Levandowski as the leader (or “Dean”) of the new religion, as well as CEO of the nonprofit corporation formed to run it. The documents state that WOTF’s activities will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” That includes funding research to help create the divine AI itself. The religion will seek to build working relationships with AI industry leaders and create a membership through community outreach, initially targeting AI professionals and “laypersons who are interested in the worship of a Godhead based on AI.” The filings also say that the church “plans to conduct workshops and educational programs throughout the San Francisco/Bay Area beginning this year ... But WOTF differs in one key way to established churches, says Levandowski: “There are many ways people think of God, and thousands of flavors of Christianity, Judaism, Islam…but they’re always looking at something that’s not measurable or you can’t really see or control. This time it’s different. This time you will be able to talk to God, literally, and know that it’s listening.” I ask if he worries that believers from more traditional faiths might find his project blasphemous. “There are probably going to be some people that will be upset,” he acknowledges. “It seems like everything I do, people get upset about, and I expect this to be no exception. This is a radical new idea that’s pretty scary, and evidence has shown that people who pursue radical ideas don’t always get received well. At some point, maybe there’s enough persecution that [WOTF] justifies having its own country.” (11.15.17)

Source: https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/
 
but there's STILL extremely hard to crack edge-cases despite the endless budgets
Endless budgets can't fix the fact that the onboard memory is very limited; you have NNs smaller than rodent NNs trying to drive cars. Give it more onboard memory and compute for larger NNs and you'll see how fast these cases are cracked.
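As a back-of-the-envelope illustration of that memory point (the sizes are illustrative only, not figures from any actual automotive system):

```python
# Rough arithmetic behind the memory argument: parameter count times
# bytes per weight. Sizes below are illustrative, not real automotive specs.
def model_memory_mb(params: float, bytes_per_weight: int = 2) -> float:  # fp16 weights
    return params * bytes_per_weight / 1e6

for params in (10e6, 100e6, 1e9):
    print(f"{params / 1e6:>6.0f}M params ~= {model_memory_mb(params):>6.0f} MB of weights")
```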
 

Lasha

Member
Endless budgets can't fix the fact that the onboard memory is very limited; you have NNs smaller than rodent NNs trying to drive cars. Give it more onboard memory and compute for larger NNs and you'll see how fast these cases are cracked.

Some of the edge cases may be unsolvable regardless of compute power. Or at least unsolvable within the time needed to make a life or death traffic decision. Especially on vehicles like Tesla which keep stripping sensors to increase margins.
 

Tams

Gold Member
I can guess. Are you aware of the Dead Internet Theory? That's what ChatGPT is going to bring. In just a few years it'll be astroturfing bots, propagandist bots and control-the-narrative bots talking to each other while being supported by an army of voting bots. All organic discussion is going to be limited to a few small communities, while social media will be fully automated propaganda machines. Reddit and Twitter are already full of astroturfers and bots; ChatGPT is just going to make them worse on a whole new level.
I suspect that eventually the only online places to have a human-to-human discussion will be forums which require some sort of verification with an ID.

Government-issued IDs won't be popular or wanted (but would still be used), so I suspect some sort of anonymous blockchain verification. Somehow this would have to be obtainable only in person, in the real world, to verify that you aren't a computer.
 

Romulus

Member
My wife is a full-time author. She and her peers have been messing with ChatGPT for a couple of weeks and say it's terrifying at this early stage how well it can create a story. Not a good sign for job security on that front.
 
Last edited:
This thing will make life so hard for Jr programmers ...
I completely agree with this, but will it do the same for experienced software engineers who do more than simply code? I don't think so, as those people will be pushing the actual ChatGPT software forward, unless of course it automates itself.
 

Tams

Gold Member
I completely agree with this, but will it do the same for experienced software engineers who do more than simply code? I don't think so, as those people will be pushing the actual ChatGPT software forward, unless of course it automates itself.

Yes, but the need for software engineers in general will decrease and if there are fewer junior ones making it through, that means the pool to choose from will be smaller.

Now, you could argue that only the best and most naturally adept programmers are likely to make it in the industry, so the recruitment pool for senior software engineers will be smaller but of higher quality.

And while companies will certainly reduce their workforces, at the same time they will probably expect more productivity from their employees if they are using AI to assist them.
 
Yes, but the need for software engineers in general will decrease and if there are fewer junior ones making it through, that means the pool to choose from will be smaller.

Now, you could argue that only the best and most naturally adept programmers are likely to make it in the industry, so the recruitment pool for senior software engineers will be smaller but of higher quality.

And while companies will certainly reduce their workforces, at the same time they will probably expect more productivity from their employees if they are using AI to assist them.
That's exactly the same point I was making. Hugare, on the other hand, made it sound as though all programmers were about to be completely wiped out by this thing. Junior or intermediate ones are about to feel the burn; those at the top will be fine, though they will feel a slight change as well, as you noted.
 

Toots

Gold Member
I think the biggest problem with AIs is their creators. The AIs don't have drive, so they just do what we tell them, and if it turns out shitty it's our fault, I guess.

I tried whatever chat AI someone posted on the forum.
It was magnificent for a few hours, and then you start seeing the seams and the illusion of intelligence falls apart relatively fast.

Plus I wanted to create a Kenshiro avatar so I could chat with him. I copied a few wiki pages about important characters and info-dumped the poor AI with huge walls of text. My understanding was that the AI gets the info, stores it wherever it wants, and then when I ask it questions about those characters it should be able to access the info previously dumped on it and answer me.

I tried, but the goddamn AI always answered wrong, as if I hadn't dumped every bit of info it needed to answer just before...

I wanted to create a Kenshiro personal assistant who could answer every question I had about the Hokuto no Ken universe, but as of now we seem far from being able to do that...
 

Hugare

Member
That's exactly the same point I was making. Hugare, on the other hand, made it sound as though all programmers were about to be completely wiped out by this thing. Junior or intermediate ones are about to feel the burn; those at the top will be fine, though they will feel a slight change as well, as you noted.
"This thing will make life so hard for Jr programmers ..." ≠ all programmers are gonna be completely wiped out

Not the same thing

What Tams said is exactly what I meant by that statement. Also your post.
 
"This thing will make life so hard for Jr programmers ..." ≠ all programmers are gonna be completely wiped out

Not the same thing

What Tams said is exactly what I meant by that statement. Also your post.
Sorry programmers, but this thing is amazing.
This statement made it sound as though they were all doomed. Though I agree with you on the whole: junior programmers are going to have to demonstrate their worth to the company or they'll be back on the job hunt.
 

Hugare

Member
This statement made it sound as though they were all doomed. Though I agree with you on the whole: junior programmers are going to have to demonstrate their worth to the company or they'll be back on the job hunt.
Again, not what I really meant by that, which was "sorry programmers, but your life is gonna be harder from now on" and not "you guys are going to be decimated"

"Demonstrate their worth to the company" is a tough pill to swallow. Because many people already work pretty hard, but that wont be enough.

We will eventually reach a limit where "study harder" wont be enough. Like, can everyone afford to have a graduation, post grad, MBA, speak 4 different languages and etc?

When the requirements become absurd, what then? What are we going to do? Capitlaism doesnt care if people arent finding jobs because now they have AIs to work for them.
 
I tried, but the goddamn AI always answered wrong, as if I hadn't dumped every bit of info it needed to answer just before...
Yeah, the problem is it has limited short-term memory; unless it is trained to get the Hokuto no Ken information into its long-term memory, it won't work.

It is possible to do that, but either the people in charge would have to do it or give access to the model for training.
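A toy illustration of the short-term-memory point: a chat model only sees what fits in its context window, so a big info dump simply gets cut off. The 4-characters-per-token estimate and the window size below are rough assumptions, not the real tokenizer.

```python
# Toy illustration of a limited context window: anything past the token
# budget never reaches the model. The chars-per-token heuristic and the
# window size are rough assumptions.
def fit_to_context(info_dump: str, question: str, max_tokens: int = 4096) -> str:
    def approx_tokens(text: str) -> int:
        return len(text) // 4  # crude heuristic, not a real tokenizer

    budget = max_tokens - approx_tokens(question)
    if approx_tokens(info_dump) > budget:
        # Everything past this point is silently dropped, which is why
        # a pasted wiki dump can "disappear" from the bot's memory.
        info_dump = info_dump[: budget * 4]
    return info_dump + "\n\n" + question

prompt = fit_to_context("<pasted wiki pages> " * 2000, "Who trained Kenshiro?")
print(len(prompt))  # the prompt was truncated to fit the window
```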
 

Tams

Gold Member
Again, not what I really meant by that, which was "sorry programmers, but your life is gonna be harder from now on" and not "you guys are going to be decimated"

"Demonstrate their worth to the company" is a tough pill to swallow. Because many people already work pretty hard, but that wont be enough.

We will eventually reach a limit where "study harder" wont be enough. Like, can everyone afford to have a graduation, post grad, MBA, speak 4 different languages and etc?

When the requirements become absurd, what then? What are we going to do? Capitlaism doesnt care if people arent finding jobs because now they have AIs to work for them.

Technically correct, probably even an underestimate. They might well be 'trēsimated' (yeah, I made that up) in the not too distant future.
 

StreetsofBeige

Gold Member
I don't know how widespread this is among students, but if it catches on and schools can't figure out how to defend against it, get ready for decades' worth of idiot students. Anything they can do at home using AI will get great marks.

Anyone care to tell me how students do tests now? Is it tech-based? If so, get ready for cheating. But if it's still pen-and-paper tests where the kid has to do it themselves, then that can solve it. And don't bell-curve. If they suck, they suck.
 

Tams

Gold Member
I don't know how widespread this is among students, but if it catches on and schools can't figure out how to defend against it, get ready for decades' worth of idiot students. Anything they can do at home using AI will get great marks.

Anyone care to tell me how students do tests now? Is it tech-based? If so, get ready for cheating. But if it's still pen-and-paper tests where the kid has to do it themselves, then that can solve it. And don't bell-curve. If they suck, they suck.
Schools around the world are already phasing out essay homework. Not just because of AI (in the UK it's been a weird nostalgia for the past).

And the Nordic countries haven't focussed on homework at all for years (some pretty much don't issue any).

Schools are definitely aware of it. But it's not that much different from before. Such low-level essays can be put out by an adult in next to no time.

Homework will mostly focus on proof of having studied. That could be handwritten (which would allow for some essays - even if copied from an AI, it's still work) or increasingly online tests (which lighten the marking load on teachers a lot).

It's a shame. I liked essay homework/coursework. But its time has come and gone.

Universities face a bigger problem with all the coursework they have. They'll push for more tests, but I reckon they'll face significant pushback from students who have already been tested to hell and back at school.
 

64bitmodels

Reverse groomer.
That's where UBI comes in. Though much more research needs to be done, what has been done, if I recall correctly, showed no real difference to existing social benefits systems apart from removing the requirement for people to work.
UBI seems like the kind of thing that leads to hyperinflation, IMO. It's introduced... then companies raise the prices of every product they have, some above what you get paid monthly. If UBI exists, that KitKat goes from 2 dollars (which is already overpriced, mind you) to 500. Imagine the price of a PS5 under a system like that. Then people are gonna start asking for more money, we get more cash every month, and then more hyperinflation. Eventually government regulations are gonna need to be made.

Personally, the real question is what people are going to do when work isn't necessary for 99% of the population. And, more importantly, whether the AI will grow sentient and we'll be serving under it.
 
Last edited:

Tams

Gold Member
UBI seems like the kind of thing that leads to hyperinflation, IMO. It's introduced... then companies raise the prices of every product they have, some above what you get paid monthly. If UBI exists, that KitKat goes from 2 dollars (which is already overpriced, mind you) to 500. Imagine the price of a PS5 under a system like that. Then people are gonna start asking for more money, we get more cash every month, and then more hyperinflation. Eventually government regulations are gonna need to be made.

Personally, the real question is what people are going to do when work isn't necessary for 99% of the population. And, more importantly, whether the AI will grow sentient and we'll be serving under it.
While a valid concern, your comment is full of hyperbole.

And you haven't provided even a hint of a solution... at least UBI is an attempt to solve it.
 