Ruben Teijeiro: “We must learn how to work alongside machines”
March 8, 2021

by Arabella Seebaluck




Who remembers when a chatbot developed by Microsoft went rogue on Twitter in 2016? It went as far as swearing, making racist remarks and inflammatory political statements. With the rapid leaps being made in Artificial Intelligence, there is a constant fear that computers could attain a level of superintelligence that would cause some form of ‘revolution’. (There are also enough Hollywood movies out there to support the claim.) However, some scientists don’t believe there is any real danger.

Computers are the product of human intelligence. From their first designs in science labs and office buildings to the devices now in our handbags or pockets, they have been conceived and created by men and women. A recurring question in the scientific community is: what if machines suddenly began to design and build themselves?

The scenario imagined is that a superintelligent machine would design another that is more capable and more advanced. The new one would create yet another, and another, in a sort of Twilight Zone domino effect. The optimists believe their intelligence could increase exponentially, reaching levels of brainpower unattainable for the human race. This is what is referred to as the ‘Singularity’.

It’s a concept that has existed for five decades, since the days when scientists had only just begun experimenting with binary code and the systems that enabled the inception of computing. The wonder the singularity holds for the scientific community has not changed since. Scientists then could not have imagined nanotechnology or immersive reality, much as their counterparts today dream of superluminal space travel. Some even consider it achievable for the human brain to be augmented with AI circuits, enhancing what organic brains are actually capable of doing. Scientists think it may even be possible for our brains to outlive our bodies. That would lead to AI-enhanced human power with perennial superintelligence.

The Director of Oxford University’s Future of Humanity Institute, Nick Bostrom, says:  “It might be that, in this world, we would all be more like children in a giant Disneyland — maintained not by humans, but by these machines that we have created”.

We must embrace the future and learn how to work alongside machines; that will lead to new career paths and job opportunities.

RUBEN TEIJEIRO
Youpal Group co-founder and Chief Technical Officer

Youpal Group’s co-founder and Chief Technical Officer, Ruben Teijeiro, goes further: “We are producing a vast amount of data daily which can’t be processed by humans. Machines can help us convert this data into knowledge, which will support advances in medicine and pharma or the development of new scientific research. Humans created machines to help us, not to kill us. We must embrace the future and learn how to work alongside machines; that will lead to new career paths and job opportunities.”

Whether it sounds like a ‘utopian fantasy or dystopian nightmare’ to laymen, the possibility of this super-human, super-intelligent future aided by machines is real. And so are the risks that this prospect entails. What scientists fear isn’t quite the ‘revolution’ imagined, where robots take over and mercilessly dispose of their human counterparts. This Hollywood-inspired script isn’t too far off from the real concern, however: that human life isn’t measured with the same empathy by AI as it is by the human mind. In other words, if AI were controlling a fleet of autonomous cars, could it gauge the risk to someone’s life with the same reflexes as a human? If it were distributing medical supplies, would it be able to prioritise with the sensory logic a human being does?

Bostrom believes this is possible. He says that humans are in the process of designing AI “in such a way that it would in fact choose things that are beneficial for humans, and would choose to ask us for clarification when it is uncertain what we intended.” An article on the subject suggests: “There are ways we might teach human morality to a nascent superintelligence. Machine learning algorithms could be taught to recognize human value systems, much like they are trained on databases of images and texts today. Or, different AIs could debate each other, overseen by a human moderator, to build better models of human preferences.” Morality between humans and machines, however, has to be mutual. This could be a crucial way to make sure artificial life safely imitates human art.
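To make the quoted idea a little more concrete, here is a minimal, purely illustrative sketch in Python of the supervised-learning recipe the article alludes to: a model is fitted to a handful of human judgements about actions, then asked to score a new one. The tiny dataset, the labels and the use of scikit-learn are assumptions made for this example only, not a description of any real alignment system.

# Hypothetical sketch: teaching a model to score choices against human judgements.
# All data and names below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "value" dataset: short descriptions of actions, labelled by people
# as acceptable (1) or not (0).
actions = [
    "brake hard to avoid a pedestrian",
    "speed through a crowded crossing to save time",
    "send scarce medicine to the most critical patient first",
    "withhold treatment to cut costs",
]
human_labels = [1, 0, 1, 0]

# The same supervised-learning recipe used for images and text today:
# vectorise the examples, then fit a classifier to the human judgements.
preference_model = make_pipeline(CountVectorizer(), LogisticRegression())
preference_model.fit(actions, human_labels)

# The trained model can now give a rough "human approval" score to a new choice.
new_choice = ["swerve onto the pavement to save a minute"]
print(preference_model.predict_proba(new_choice)[0][1])

Real research along these lines would rely on vastly larger preference datasets, and on techniques such as the moderated AI debate mentioned above, but the underlying principle is the same: learning from human-labelled examples.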


					 