The article is meh, but what is said in there is pretty much true.
Unregulated technology is something we already have: AI generation, for example.
Current AI generation tools, whether for text or images, are among the most privacy-invasive things out there.
Take ChatGPT, Bing Chat, or Google Bard: they take every bit of text you enter, analyse it, and use it to train future models.
Users don't know this because their privacy policies are far too long to read.
So people are putting private information, trade secrets, and other things that should stay confidential into those prompts.
Image generation is currently a huge copyright issue.
Worldcoin deployed orbs, installed like art pieces in multiple cities, each with a camera that scans the eyeballs (and thereby captures the identity, since iris patterns seem to be unique to each person) of anyone who gazes into the lens. https://news.artnet.com/art-world/worldcoin-orb-ai-2341500
So unregulated tech is a huge anti-people, pro-money mess.
For brain interfaces it could be even worse. Companies could push ads directly into your brain, along with everything else described in that article.
I saw a small part of a movie a long time ago, no idea what it was.
There was a brain interface that allowed communication and displayed images.
When first enabled it was a huge mess: adverts everywhere, audio ads, so overwhelming that the character couldn't think or work out what he wanted to do.
To get back to what he had to do, he had to use a dampener to strip out all those ads and noise, the whole mess.
And that first part, so much noise that we can't act on our own will, is what will happen with unregulated brain interfaces once enough people have adopted them.
And we may not have brain "ad blocking" tech like in the movie.
The data broker industry is a far better example of the dangers of unregulated tech than AI-generated works, which are more a rebuke of copyright and a threat to labor than a privacy invasion.
It may be even worse, as you said; however, AI is currently more present in the news and perhaps easier to understand because of that.
Also, ChatGPT had a huge amount of personal info leaked to the dark net, not really because OpenAI got hacked, but because users entered their login credentials into phishing websites.
But also, since anything you input into ChatGPT/Bing Chat/Bard is scanned, there is a real antitrust/corporate-espionage angle: OpenAI/Microsoft and Google may be able to spy on any user who leaks details of a competing AI's development.
" ai generation be text or image, is that they are the most privacy invasive thing there could be."
ai being used by advertisers and social media are far more of an invasion of privacy, and have been for a decade.
the focus on LLMs and art generators are silly.
habit prediction and manipulation is a far greater risk. the socio-economic system is also a bigger reason to worry than any technological improvement itself.
the rhetoric keeps focusing on LLMs and image generation, which is silly to me.
the real problem is handing over private information as if it were safe, not the LLMs. that's been an issue for ages, and nothing has been done to fix it, which is why it's still an issue. LLMs or not, we should be teaching these problems in school, along with critical thinking and media bias/manipulation studies. everyone would benefit from a more statistical understanding of information.
"so much noise that we can't act on our own will is what will happen with unregulated brain interfaces once enough people have adopted them."
this is already the social paradigm: propaganda and "media" encouraged by those with the money to fund it. they don't need to literally see inside your head and change things when they already know you're quitting a brand or in a failing relationship before you do.
BCI isn’t a worry quite yet.
" ai generation be text or image, is that they are the most privacy invasive thing there could be." ai being used by advertisers and social media are far more of an invasion of privacy, and have been for a decade.
the focus on LLMs and art generators are silly.
habit prediction and manipulation is a far greater risk. the socio-economic system is also the bigger reason to worry about any technological improvement than the technology itself.
the rhetoric keeps focusing on LLMs and image generation, which is silly to me. the problem of giving private information as if it is safe is also the issue, not the LLMs. that’s been an issue for ages, and there has been nothing done to fix it, which is why it’s still an issue. LLMs or not, we should be educating for these problems in school. also critical thinking and media bias/manipulation studies. everyone would benefit with a more statistical understanding of information.
“so much noise that we cannot do anything from our own will is what will happen for non regulated brain interface after enough people have adopted it.”
this is already the social paradigm with propaganda and “media” encourage by those with the money to do so. they don’t need to literally see inside of your head and change things when they already know if you’re quitting a brand or in an unsuccessful relationship before you do.