Here’s a Glimpse into the Future of Work: No Monitors, just 3-D Holograms

The subject of Artificial Intelligence has been discussed in terms of its possible applications in the physical replacement of humans in specialised work areas (pharmacists and doctors, retail environments and offices, just for starters) and what the New World would look like after such a massive social disruption (see Avatars May Take Over Patient Communications – Where to from there? and Robots + AI = Rx Replacement?).
Obviously, Artificial Intelligence will not arrive at its “ultimate application” overnight, but it has the capacity to disrupt many work systems along the way. Scientific debate surrounding Artificial Intelligence dates back to 1950, when British scientist Alan Turing first proposed a test for machine intelligence.
He was well-qualified to do so, having worked at the Government Code and Cypher School during World War Two at Bletchley Park, where he was instrumental in breaking German military codes, in particular, to determine settings for the German Enigma device that was designed to create unbreakable codes.

Turing invented the Turing Machine, considered to be the first model for a general purpose computer.
Turing’s work successfully shortened the war in Europe.
His test for machine (artificial) intelligence has stood the test of time and is still applicable today despite major advances in Artificial Intelligence. In his 1950 paper, Turing proposed a test called “The Imitation Game” that might finally settle the issue of machine intelligence. The first version of the game, he explained, involved no computer intelligence whatsoever. Imagine three rooms, each connected to the others via computer screen and keyboard.
In one room sits a man, in the second a woman, and in the third sits a person – call him or her the “judge”. The judge’s job is to decide which of the two people talking to him through the computer is the man.
The man will attempt to help the judge, offering whatever evidence he can (the computer terminals are used so that physical clues cannot be given) to prove his manhood.
The woman’s job is to trick the judge, so she will attempt to deceive him, and counteract her opponent’s claims, in hopes that the judge will erroneously identify her as the male.

What does any of this have to do with machine intelligence?
Turing then proposed a modification of the game, in which instead of a man and a woman as contestants, there was a human, of either gender, and a computer at the other terminal.
Now the judge’s job is to decide which of the contestants is human, and which the machine.
Turing proposed that if, under these conditions, a judge performed no better than chance (that is, was as likely to pick the computer as the human), then the computer must be a passable simulation of a human being and hence intelligent.
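Turing's pass criterion can be phrased as a simple check on a judge's trial record. The following is a hypothetical sketch (the function name and the trial-count framing are illustrative, not from Turing's paper):

```python
def passes_turing_criterion(correct_identifications: int, trials: int) -> bool:
    """The machine 'passes' if the judge does no better than chance
    (50%) at telling human from machine across repeated trials."""
    accuracy = correct_identifications / trials
    return accuracy <= 0.5

# A judge who is right only half the time cannot tell them apart:
print(passes_turing_criterion(5, 10))   # chance-level: machine passes
print(passes_turing_criterion(9, 10))   # judge reliably spots the machine
```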
The game has been recently modified so that there is only one contestant, and the judge’s job is not to choose between two contestants, but simply to decide whether the single contestant is human or machine.
The standard reference entry on the Turing Test is short, but very clearly stated.

Partly out of an attempt to pass Turing’s test, and partly just for the fun of it, there arose, largely in the 1970s, a group of programs that tried to cross the first human-computer barrier: language.
These programs, often fairly simple in design, employed small databases of (usually English) language combined with a series of rules for forming intelligent sentences.
While most were woefully inadequate, some grew to tremendous popularity.
Perhaps the most famous such program was Joseph Weizenbaum’s ELIZA.
Written in 1966, it was one of the first such programs and remained for quite a while one of the most convincing.
ELIZA simulates a Rogerian psychotherapist (the Rogerian therapist is empathic, but passive, asking leading questions, but doing very little talking. e.g. “Tell me more about that,” or “How does that make you feel?”) and does so quite convincingly, for a while.
There is no hint of intelligence in ELIZA’s code; it simply scans for keywords like “mother” or “depressed” and then asks suitable questions from a large database.
Failing that, it generates something generic in an attempt to elicit further conversation.
Most programs since have relied on similar principles of keyword matching, paired with basic knowledge of sentence structure.
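The keyword-matching principle behind ELIZA and its successors can be sketched in a few lines. This is an illustrative toy, not Weizenbaum's original code: the rule table, keywords, and canned replies here are invented for the example.

```python
import random
import re

# Keyword -> canned replies. Real ELIZA used a much larger, ranked script.
RULES = {
    "mother": ["Tell me more about your mother.",
               "How do you feel about your family?"],
    "depressed": ["I am sorry to hear you are depressed.",
                  "How long have you felt this way?"],
    "always": ["Can you think of a specific example?"],
}

# Generic prompts used when no keyword matches, to keep the talk going.
FALLBACKS = ["Tell me more about that.",
             "How does that make you feel?"]

_rng = random.Random(0)

def respond(user_input: str) -> str:
    """Return a therapist-like reply via simple keyword matching."""
    words = re.findall(r"[a-z']+", user_input.lower())
    for keyword, replies in RULES.items():
        if keyword in words:
            return _rng.choice(replies)
    return _rng.choice(FALLBACKS)  # no keyword matched: elicit more talk
```

A session is just repeated calls to `respond`: the program never models meaning, only surface vocabulary, which is exactly why it fools people "for a while" and then breaks down.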

Although Turing proposed his test in 1950, it was not until some 40 years later, in 1991, that the test was first really implemented.
Dr. Hugh Loebner, an inventor very much interested in seeing AI succeed, pledged $100,000 to the first entrant that could pass the test.
The 1991 contest had some serious problems, though (perhaps most notably that the judges were all computer science specialists and knew exactly what kind of questions might trip up a computer), and it was not until 1995 that the contest was re-opened.
Since then, there has been an annual competition, which has yet to find a winner.
While small prizes are given out to the most “human-like” computer, no program has reached the 50% threshold Turing aimed for. Pharmacists may breathe a sigh of relief that their jobs are, for the moment, still secure pending some future breakthrough.

Alan Turing’s imitation game has fueled many years of controversy, with little sign of slowing.
On one side of the argument, human-like interaction is seen as absolutely essential to human-like intelligence. A successful AI is worthless if its intelligence lies trapped in an unresponsive program.
Some have even extended the Turing Test.
Stevan Harnad has proposed the “Total Turing Test”, in which instead of language alone the machine must interact in all areas of human endeavour, and instead of a five-minute conversation the test lasts a lifetime. James Sennett has proposed a similar extension that challenges AI to mimic not only human thought but personhood as a whole.
To illustrate his points, Sennett uses Star Trek: The Next Generation’s character ‘Data’.
Opponents of Turing’s behavioural criterion of intelligence argue that it is either not sufficient, or perhaps not even relevant at all.
What is important, they argue, is that the computer demonstrates cognitive ability, regardless of behaviour.
It is not necessary that a program speak in order for it to be intelligent.
There are humans that would fail the Turing test, and unintelligent computers that might pass.
The test is neither necessary nor sufficient for intelligence, they argue.

We fast-forward now to what is actually being delivered to an office environment.
An earlier article published by i2P, titled Avatars May Take Over Patient Communications – Where to from there?, described some of the futuristic potential that appears in a story prepared by Selina Wang, a reporter at Bloomberg News.
Her story follows in coloured text below the image and is titled: Here’s a Glimpse into the Future of Work: No Monitors, just 3-D Holograms

“Imagine an office without computer monitors, cubicles or chairs.
Everyone is wearing headsets that project their work in the form of 3D holograms.
Meta, an augmented reality startup, is making that reality.
Starting a few months ago, the company started ripping out everyone’s monitors and replacing them with AR headsets.
To stand a chance against its deeper-pocketed rivals like Apple and Microsoft (who are making competing devices), Meta needs to improve and find use-cases for the technology faster than anyone else.

I made the trek to Meta’s California offices from New York to try the technology myself (be sure to check out the awesome video that my colleague David Nicholson shot for this piece; we recorded a podcast about it too).
I was sitting in the office of Ryan Pamplin, Meta’s head evangelist.
His office was bare — just a white desk with a headset sitting on it.
I put the device on and suddenly I saw photos of him and his girlfriend plastered everywhere.
There was also a bust of Steve Jobs and a mini Tesla model “sitting” on Pamplin’s desk.

Meta’s version of desktop icons is a holographic shelf holding a bunch of spheres.
Each of the objects represents a different application.
I grabbed the web browser.
With my hands, I pulled the web page to be twice as large as my regular desktop.
I read some Bloomberg News articles and watched a short video.
I picked up a 3D model of a human eyeball.
I made it as large as my head, and stared at the veins.
I examined a model of the human body, standing up to see the whole thing since it was so large.

The experience wasn’t perfect.
I didn’t like the way the headset felt on my head and I had some trouble grabbing the holographic objects.
But the experience was so immersive that I was disoriented when the headset came off.
The colors vanished and the world looked small.

Since that experience, I’ve become sensitive to how confining our devices are.
We all spend hours a day with our eyes glued to our computer or phone screens.
The moment a text comes up, we hunch over to respond.
As much as I rely on my phone and laptop, I’m more than ready to give them up for a pair of glasses that would project anything I want into the space around me.
It’s still a ways off, but how great would it be to have a movie theater-sized screen to take with me wherever I went?
And never have to carry around a bulky laptop for work?
Or rather than skyping with a friend, have them appear as a hologram.”

It is the last sentence that contains the seeds of destruction for specialist jobs such as pharmacists and doctors.
There is a lot of distance yet to be covered in the field of AI and society will continue to look over its collective shoulder trying to figure out a survival defence.

For the moment, your job seems secure.
How long it will remain so is still anyone’s guess!
Global drug companies are known to be investing extraordinary sums of money in AI research and holograms.
Amazon is doing a lot of work in the retail area by researching robotics and automated logistic systems.
