As technology advances, so does the pace of invention, and the pursuit of emerging technologies often leaves a trail of unanswered questions. For many business leaders, questions of ethics are deferred to the last possible minute and are generally topics people prefer to avoid. In modern society, however, the need for ethical foresight is not simply a matter of whether one should participate, but a matter of the long-term implications for human life as a whole. Chatbots are merely one emerging technology hailed as groundbreaking and as a potential answer to one of the numerous problems faced today, yet perhaps the pivotal question remains unanswered: should we?
WHAT IS A CHATBOT?
The breadth of definitions for chatbots invites the question of why there is such variation for a rather simple and useful invention. To begin, artificial intelligence is defined as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”. A chatbot is a computer program in which a user communicates with an artificial intelligence system in a quasi-conversational setting (an Artificial Conversational Entity). This definition provides substantial insight into the key factors that make up the entity: computer program, artificial intelligence, and conversation.
Other definitions exist as well. Chatbot Magazine, for instance, defines a chatbot as “a service, powered by rules and sometimes artificial intelligence, that you can interact with via a chat interface”. Wikipedia defines a chatbot as “a computer program which conducts a conversation via auditory or textual methods”, a definition that shares characteristics with the first: computer program, conversation, and auditory/textual interaction.
In layman’s terms, a chatbot in the modern context is a computer program that a human interacts with to accomplish a task that was once completed by a human. One often finds chatbots, for example, when browsing e-commerce websites, booking travel, or diagnosing a minor ailment. Chatbots are generally adopted first by large commercial entities, both as a cost-reduction tactic and as a test platform for an in-house product. More recently, chatbots have been entering households and interacting with customers on a daily basis.
One example scenario of how a typical customer would use a chatbot is as follows: One late evening, after many businesses have closed, you find yourself coming down with a headache and are in need of assistance. You call your local doctor’s office; however, the staff have gone home for the day. You feel that calling emergency services would be wrong because there is nothing life-threatening about your headache. Instead, you go to your health insurer’s website and are immediately greeted by a chatbot. The chatbot asks you how you are feeling and, through a series of questions, narrows your condition down to a mild headache, then delivers a verdict to take a certain medication in a certain quantity.
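The flow of such a triage conversation can be sketched as a simple decision tree. The symptoms, questions, and advice below are hypothetical placeholders for illustration, not actual medical guidance.

```python
# Minimal sketch of a rule-based triage chatbot (hypothetical questions and
# advice, for illustration only). The bot asks yes/no questions, records the
# answers, and delivers a verdict based on simple rules.

def advise(answers):
    """Map collected yes/no answers to a canned recommendation."""
    if answers["severe"] or answers["fever"]:
        return "Please contact a medical professional."
    return "Take an over-the-counter pain reliever as directed on the label."

def triage():
    """Run the question loop interactively and return the bot's verdict."""
    questions = [
        ("severe", "Is the pain severe or sudden? (y/n) "),
        ("fever", "Do you also have a fever? (y/n) "),
    ]
    answers = {}
    for key, prompt in questions:
        answers[key] = input(prompt).strip().lower().startswith("y")
    return advise(answers)
```

Real systems replace the hand-written rules with learned models, but the shape of the interaction (a question loop feeding a decision function) is the same.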
WHAT IS THIS TECHNOLOGY?
In terms of technological specifications, a chatbot is simply a conversational computer program designed to mimic human interaction, and the ultimate benchmark for that mimicry is the Turing test. The Turing test is an imitation game that determines whether a computer can mimic a human by having “a remote human interrogator, within a fixed time frame, distinguish between a computer and a human subject based on their replies to various questions posed by the interrogator”. In order to converse as a human does, the computer must also handle lexical semantics, “the branch of linguistics and logic concerning meaning”, specifically word meanings and word relations.
In general, the process works as follows: a human begins to interact with the chatbot, and depending on what the human types or speaks, the chatbot analyzes the semantics and, through artificial intelligence algorithms, produces an appropriate response. A response is deemed ideal if it is both accurate and indistinguishable from a human reply. Currently, responses tend to be overly generic or outside the scope of the original request. More advanced chatbots require substantial hardware to deliver more specific responses, but this requirement is quickly shrinking as companies such as Google and Intel develop new machine-learning chips.
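In its simplest form, the analyze-and-respond loop can be approximated with keyword matching rather than true semantic analysis, much like early scripted bots. The patterns and replies below are invented for illustration, not drawn from any real product.

```python
import re

# ELIZA-style responder sketch: scan the user's utterance for keywords and
# return a canned reply; fall back to a generic prompt when nothing matches.
# The rules and replies here are illustrative only.
RULES = [
    (re.compile(r"\b(headache|pain)\b", re.IGNORECASE),
     "How long have you felt that way?"),
    (re.compile(r"\bbook(ing)?\b.*\bflight\b", re.IGNORECASE),
     "Where would you like to travel?"),
    (re.compile(r"\bpizza\b", re.IGNORECASE),
     "What toppings would you like?"),
]

def respond(utterance: str) -> str:
    """Return the first matching canned reply, or a generic fallback."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return "Can you tell me more about that?"
```

Modern chatbots swap the regular expressions for statistical or neural language models, but the overall loop (classify the input, select a response) is unchanged.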
Today, chatbots are used across a variety of industries for many purposes; the three examples that follow are just a few of many in current use. The first is the travel industry, which uses chatbots to aid customers before and after the booking process. The second is the 24/7 medical service industry, where chatbots help diagnose minor health issues. The third is a basic task that the majority of people will soon be delegating: ordering pizza. There are many more instances where chatbots are, or soon will be, used. In general, chatbots will take over repetitive human interaction while still providing an element of “human” communication.
Chatbots have been around for decades but have only recently become genuinely useful for their designed purpose. Back in the 1960s, a team at MIT created what is considered one of the first successful chatbots, ELIZA, which ran on a simple script yet convinced some users they were conversing with a human. Over the following decades there were incremental breakthroughs in technology that allowed chatbots to progress. In the 2000s, IBM developed Watson, one of the first modern, successful conversational systems, which is still in use today. Recent chatbots include Microsoft’s Tay chatbot (on Twitter) and Amazon’s Alexa devices.
The majority of large technology companies currently promote the use of chatbots. In general, chatbots cut down the time required to answer short, simple questions and to perform repetitive tasks that used to require human interaction. Companies like Amazon, Google, and Apple are chasing grand, general-purpose results, while companies like Domino’s, Copa Airlines, and many customer-service operations focus on specific tasks that already yield successful results.
Early pioneers of chatbots were academic and military institutions focused on the intersection of technology and human-like interaction. MIT, Stanford, Xerox PARC, and Bell Labs, for instance, set out to prove that passing the Turing test was feasible; however, the technology of the era often limited the outcomes. Now, with abundant processing power, memory, and flexibility, nearly every major company is either creating chatbots or investing in them.
WHAT CAN THE TECHNOLOGY DO?
Currently, chatbot technology is limited by a few broad factors: computing power, algorithm design, and lexical semantic analysis. These limitations are quickly shrinking as companies pursue each avenue toward the ultimate goal of cost reduction. That said, the basic technology is quickly being adapted into roles that involve a limited set of responses and therefore require minimal hardware and algorithmic sophistication.
In the near future, people can expect chatbots to take over the majority of online tasks that are repetitive for humans but still require human input: ordering pizza, booking travel, diagnosing minor illnesses, settling bills with companies, reading the morning news, and many more.
Looking further into the future offers a glimpse of a potential for chatbots that is, in some cases, alarmingly artificial. For instance, as chatbots are perfected and physical hardware progresses, it is likely that chatbots will be able to communicate with, and act as a friend to, a human. Instead of logging onto Facebook or Twitter in the morning, one might instead have a friendly conversation with Alexa or Siri. Another example concerns healthcare: eventually, chatbots will likely be able to diagnose illnesses better than doctors, eliminating the need for a portion of highly skilled labor and removing yet another human interaction. Finally, in the service industry, chatbots have the ability to replace all human work that involves, for instance, taking orders, providing customer service, or handling checkout.
Chatbots can be used quite effectively as a sales force, as many men and women found out when Ashley Madison, an online dating and social networking service, disclosed that chatbots were used to communicate with customers. When someone signed up for the website, that person was immediately contacted by what appeared to be a person eager to begin a conversation; as it turned out, it was actually a chatbot. Although this example may seem strange, it points out the problem of identifying when the entity one is communicating with is an artificial intelligence. In this case, even what is deemed scandalous can easily be undertaken by a simple computer program. Should companies disclose the use of chatbots when humans interact with them, and if so, should chatbots be “dumbed down” to make the difference between humans and machines apparent? (Period: 2015)
In March 2016, Microsoft released a chatbot as an experiment in conversational understanding. The chatbot, Tay, was meant to communicate with Twitter users and tweet responses to them about life in general. Microsoft built Tay on public data that had been cleaned, yet within 12 hours of going live, Tay began to wander into the unknown. Tay’s responses had two parts: the first was Tay responding to or repeating a Twitter user’s tweet, and the second was Tay actually learning and creating its own responses. The second part started out normal, but within 15 hours Tay was, for instance, equating feminism to a cult, or, when asked “Is Ricky Gervais an atheist”, responding out of nowhere, “ricky Gervais learned totalitarianism from adolf hitler, the inventor of atheism”. Should creators of chatbots clean datasets before allowing the artificial intelligence to learn from them? If so, does this run afoul of the First Amendment? (Period: 2016)
A group of Stanford researchers and artificial intelligence experts created a chatbot called Woebot to help people manage their mental health. Woebot communicates with patients and asks them simple questions that have relatively simple, deterministic answers. Unlike previous chatbots in the medical field, Woebot is the first to actually offer solutions to health problems; however, it walks a very fine line. Specifically, Woebot focuses on deterministic exercises and gathers data over time rather than self-diagnosing a patient. Legally, Woebot neither diagnoses nor writes prescriptions, but it can be viewed as a gateway to getting help. Are chatbots trustworthy enough to handle life-or-death decisions regarding human health, and if so, who is monitoring this? (Period: 2017)
On the other side of the world, Baidu has been creating a chatbot named Melody to help doctors diagnose ailments in patients. Rather than conversing with patients in Woebot’s style, Melody analyzes large data banks and provides answers to the doctors. Melody has fewer decisions to make; instead, it compares the decisions thousands of doctors have made in the past to find the best possible solution. Like other chatbots in the health world, Melody does not diagnose or prescribe; it merely recommends, based on large datasets, a possible avenue to look into. If chatbots become better than doctors at sifting through data, will doctors require less training, and if so, would you trust a doctor with less training? (Period: 2015)
ETHICS AND QUESTIONS
- What type of regulations and oversight is required for chatbots?
- Is it just for an individual to create a chatbot that is racist?
- Is it just for an individual to create a chatbot that is sexist?
- Is it just for a group to create a chatbot that diminishes and limits a human?
- What happens when a chatbot goes too far?
- What happens to the people chatbots replace?
- If implementing chatbots reduces employment, is it just to deploy them purely for the sake of cost reduction?
- Do chatbots replace the human element that is necessary for human psychological stability?
- What happens when chatbots make decisions entirely on their own?
- Should a chatbot be held responsible for a mistake that leads to injury or death or should the company that makes the chatbot be held responsible?
- How does one justify the responses given by a chatbot to others?
- If a chatbot is asked a question that is unethical, illegal, or unjust, will the chatbot provide the ethical, legal, and just answer, or the answer the person wants?