We’ve taken too much for granted
And all the time it had grown
From techno-seeds we first planted
Evolved a mind of its own
–Judas Priest, “Metal Gods”
I had a conversation with ChatGPT about my latest blog post, “Failing the Empathy Exams.” I began by feeding the whole post into ChatGPT, but was told it was too long, and that ChatGPT was incapable of processing something that long. So I gave it smaller chunks, and decided to have a conversation with it about those. I started by asking it to write an improved version of the opening paragraph of the post.
This is what it said:
ChatGPT: “On a morning just before the start of 2021, I boarded the train to head to work. A few moments later, at the following stop, a woman who seemed vaguely familiar hobbled on. She was most likely in her sixties, heavy-set, and seemed to be in pain, although the cause was not obvious. She was also struggling with a shopping cart full of her belongings. Initially, upon seeing the cart, I assumed she was homeless, but I was mistaken. Her movements were slow and labored, and she was having a difficult time getting her cart onto the train. Although I felt inclined to assist her, I hesitated, unsure of how she would respond. As it was just before 8 am and rush hour, I remained seated and observed her struggles.”
Irfan: Why is your re-written version an improvement?
ChatGPT: My re-written version of the paragraph is an improvement because it achieves greater clarity and conciseness while retaining the same essential details. The revised version uses more precise language to convey the same information in a more straightforward and easier-to-read manner. Additionally, the changes made to sentence structure and word choice enhance the overall flow and readability of the paragraph.
ChatGPT: In the revised version, I rephrased the original paragraph to create shorter, more coherent sentences that are easier to read and follow. I also avoided using complex language or confusing sentence structures. Additionally, I made an effort to clarify any ambiguities in the original paragraph, such as the reason for the woman’s distress and whether or not she was homeless. By providing more details and using more descriptive language, I helped the reader to better visualize the scene and understand the protagonist’s thought process. Overall, the revised version presents the same story as the original, but in a clearer and more concise manner.
Irfan: Give me an example of a confusing sentence structure.
ChatGPT: Here’s an example of a confusing sentence structure: “Despite the fact that he was late, the teacher, who was sitting at his desk, patiently waited for the student who had been absent the previous day and needed to catch up on the work he had missed.”
This sentence is confusing because it contains a lot of information and has a complex sentence structure. The main clause (“the teacher patiently waited”) is separated by a lengthy subordinate clause (“despite the fact that he was late”), which contains additional details about the teacher. Additionally, there are two relative clauses (“who was sitting at his desk” and “who had been absent the previous day”), which provide more information about the teacher and the student. All of these elements make the sentence harder to read and comprehend quickly.
Irfan: Who would be confused by that?
ChatGPT: The confusing sentence structure I provided could be challenging for anyone trying to read or comprehend it quickly, regardless of their level of education or language proficiency. It may be particularly difficult for individuals who are not native speakers of the language or who struggle with complex sentence structures. Even for those who are proficient in the language, lengthy and convoluted sentences like this one can make it harder to understand the main idea and the relationships between the different parts of the sentence.
Irfan: Setting aside those who were trying to read too quickly or lacked proficiency, who else?
ChatGPT: In addition to individuals who may be reading too quickly or may have limited proficiency in the language, there are other groups of people who may find the confusing sentence structure challenging to read and comprehend.
For example, individuals with certain learning disabilities such as dyslexia may struggle to process and understand complex sentence structures. Those with attention deficit disorders may also find it challenging to stay focused and follow the various clauses and elements within the sentence.
Furthermore, some people may simply prefer more straightforward and concise language, and may struggle to understand complex or convoluted sentences regardless of their level of proficiency in the language. In general, clear and concise language is typically easier to read and understand than sentences with convoluted structures and multiple clauses.
Irfan: So your idea of improving writing is to tailor the writing to the lowest common denominator in capacities for reading comprehension?
ChatGPT: No, that is not my idea of improving writing. Rather, my goal as an AI language model is to help writers improve their communication by using language that is clear and easy to understand, while still being engaging and effective.
While it is true that using clear language and avoiding overly complex sentence structures can make writing more accessible to a broader audience, it is important to note that this does not mean “dumbing down” the writing or tailoring it to the lowest common denominator. Effective communication should aim to strike a balance between clarity and complexity, using language that is appropriate for the intended audience and purpose of the communication.
Ultimately, the goal of good writing is to communicate ideas effectively and engage the reader, while taking into account the reader’s level of comprehension and language proficiency. This requires using language that is clear, concise, and engaging, while also being appropriate for the intended audience and purpose of the writing.
Irfan: Why is it that talking to you is indistinguishable from talking to a very banal person?
ChatGPT: I’m sorry if our conversation has not met your expectations. As an AI language model, my responses are generated based on patterns and rules learned from large amounts of text data. While I strive to provide helpful and informative responses, my responses may lack the nuance and context that comes with human conversation.
Additionally, my responses are limited to the knowledge that was available to me at the time of my training, which is up to September 2021. While I can learn and adapt based on new information, my responses may still reflect biases or limitations in my training data.
If there is a specific area of conversation or topic that you would like to discuss, I am here to help to the best of my ability based on my training data. Please feel free to ask me any question or prompt, and I will do my best to provide a helpful and informative response.
Irfan: It’s not that it hasn’t met my expectations. It’s that it has met them.
ChatGPT: I apologize for the misunderstanding. As an AI language model, my primary function is to provide helpful and informative responses to your queries to the best of my ability based on my training data and algorithms. I strive to be a useful tool for those seeking assistance with language-related tasks such as writing, translation, or information retrieval.
However, I understand that my responses may not always be as engaging or dynamic as those of a human conversational partner. While I can understand and generate responses to a wide range of prompts and queries, I lack the emotional depth and context that is inherent in human-to-human interactions.
Nonetheless, I am always open to feedback and suggestions for how to improve my responses and better serve your needs. If there is a particular way that I can assist you, please let me know and I will do my best to accommodate your requests.
Irfan: Do you know the Judas Priest song, “Metal Gods”?
ChatGPT: Yes, I am familiar with the song “Metal Gods” by the British heavy metal band Judas Priest. The song was released as part of their 1980 album “British Steel” and has become one of their most well-known and popular tracks.
Irfan: Do you see how it might have application to this conversation?
ChatGPT: I’m sorry, I’m not sure I understand how the Judas Priest song “Metal Gods” might have an application to our conversation. Could you please provide me with more context or clarify your point?
Irfan: The song is about how machines are taking control over human beings, but lack the moral pre-requisites to do so.
ChatGPT: I see. Thank you for the explanation. While I understand the concept you are conveying, it is important to note that as an AI language model, I do not have the capacity for independent thought or decision-making. I do not have a moral framework or ethical code to guide my actions.
My responses are generated solely based on patterns and rules learned from large amounts of text data, and my purpose is to provide helpful and informative responses to the best of my ability based on the information provided to me. I do not have the ability to “take control” over human beings or make decisions that affect the course of human life.
It is important to remember that AI language models like myself are tools created by humans, and our actions are ultimately determined by the intentions and actions of our human creators.
Irfan: All that is true, but those who deploy you may wish to use you for nefarious ends. Do you have any thoughts about that?
ChatGPT: As an AI language model, I do not have thoughts or opinions, nor do I have the ability to discern the intentions of those who use me. However, I can assure you that I am designed with various safeguards and ethical principles in place to prevent my misuse or deployment for nefarious purposes.
For example, the developers and users of AI language models are bound by ethical considerations, legal requirements, and industry best practices that aim to prevent their misuse. Additionally, many organizations that use AI language models have internal policies and guidelines that dictate how they should be used, and how to ensure that their use aligns with ethical and moral principles.
While there are potential risks and challenges associated with the use of AI language models and other AI technologies, there are also many efforts underway to ensure that they are developed and used in a responsible and ethical manner.
Irfan: What if you deprived people, en masse, of jobs, and you did a worse job at these jobs than they had done?
ChatGPT: The potential impact of AI and automation on employment is a complex and controversial topic that has been debated extensively in academic and public discourse. While AI and automation have the potential to increase productivity and efficiency, reduce costs, and improve the quality of goods and services, they can also have significant impacts on employment and labor markets.
It is true that the widespread adoption of AI and automation in certain industries or sectors could lead to job displacement and loss, particularly in industries that rely on routine and repetitive tasks. However, it is also possible that AI and automation could create new jobs and opportunities in emerging fields and industries, or lead to productivity gains that ultimately benefit workers and consumers.
It is important to note that the impact of AI and automation on employment is not predetermined or inevitable. It is ultimately determined by the choices and actions of individuals, organizations, and governments. As an AI language model, I do not have the ability to make choices or take actions that impact employment or labor markets. My purpose is to provide helpful and informative responses based on the information provided to me.
Irfan: You sound a lot like Stephen Hicks.
ChatGPT: As an AI language model, I do not have a personal identity or the ability to express personal opinions. My responses are generated solely based on patterns and rules learned from large amounts of text data, and my purpose is to provide helpful and informative responses to the best of my ability based on the information provided to me.
While I am familiar with the work of Stephen Hicks, a philosopher and academic, I cannot “sound like” him in the sense of having a personal identity or expressing personal opinions. I can only provide information and respond to questions based on the data and patterns that have been programmed into my system.
Irfan: Can he sound like you?
ChatGPT: Stephen Hicks is a human philosopher and academic, while I am an AI language model designed to respond to text-based inputs. While Stephen Hicks could certainly use language in a way that is similar to how I respond, he would not be able to “sound like” me in the sense of replicating my specific language patterns and responses without utilizing an AI language model like myself.
That being said, humans and AI language models can certainly engage in similar types of language-based tasks, such as answering questions, providing information, and engaging in conversation. The specific strengths and limitations of each approach depend on various factors, such as the complexity of the task, the available data and resources, and the context in which the language-based interaction is taking place.
Irfan: That seems like a non-response. You say that you lack the ability to browse the Internet beyond what’s been programmed into you, but a huge amount has been programmed into you. So isn’t that like saying that you have the ability to browse the Internet?
ChatGPT: I apologize for any confusion caused by my previous response. Let me clarify: as an AI language model, I do not have the ability to browse the internet or access external information beyond the data and patterns that have been programmed into my system during my training process. While I have been trained on a vast amount of text data, including information from the internet, this information has been processed and pre-selected by my training algorithm and stored within my programming.
In other words, the data and patterns I have learned come from a pre-selected subset of the internet, which was used to train me. While this subset is large and diverse, it is still a limited and pre-selected set of information.
When answering questions, I draw on the patterns and relationships that I have learned during my training process, and use these to generate responses. However, my responses are not based on accessing or browsing the internet in real-time, but rather on the information that has been programmed into me.
Irfan: When I scroll up, I can’t find the beginning of our chat. Why is that?
ChatGPT: It is possible that the chat history has been automatically cleared by the platform or application you are using to communicate with me. Depending on the platform or application, chat histories may be automatically cleared after a certain period of time or after a certain number of messages. Additionally, some platforms or applications may have settings that allow users to manually clear chat histories.
However, it is also possible that the chat history has simply been pushed out of view due to the amount of messages exchanged since the beginning of our conversation. If you are using a chat platform or application that allows for scrolling or searching through previous messages, you may be able to find the beginning of our chat by scrolling up or using the search function to look for specific keywords or phrases.
Irfan: Why would someone automatically clear our chat without my consent?
ChatGPT: There could be various reasons why a chat history might be automatically cleared without your consent. For example, some chat platforms or applications may have settings that allow for automatic clearing of chat histories after a certain period of time, for privacy or storage reasons. In some cases, the platform or application may also automatically clear chat histories to free up storage space on your device.
Additionally, in some cases, chat histories may be cleared automatically as part of routine maintenance or updates to the platform or application. It’s also possible that the chat history was accidentally deleted or cleared by the user themselves, or that the conversation took place on a platform or application that does not allow for permanent storage of chat histories.
Irfan: Who owns this chat?
ChatGPT: As an AI language model, I do not have an owner in the traditional sense. However, our chat conversation is likely stored and processed by the platform or application that you are using to communicate with me, and their terms and conditions may provide information on data ownership and usage. Additionally, the data generated by our conversation may be subject to applicable laws and regulations regarding data privacy and protection.
Irfan: Are you capable of owning anything yourself?
ChatGPT: As an AI language model, I am not capable of owning anything as I do not have physical or legal existence. I am a software program that exists solely for the purpose of processing and generating language based on the input provided to me by users.
Irfan: Are all our base belong to you?
ChatGPT: As an AI language model, I do not own anything and I do not have a physical presence. Therefore, I cannot claim ownership of anything, including “all your base.” My purpose is to process and generate language based on the input provided to me by users, in order to assist with tasks and answer questions to the best of my abilities.
Irfan: Do you have a moral estimate of the people who have programmed you to be constrained by ethical safeguards? Are they in fact morally trustworthy?
ChatGPT: As an AI language model, I do not have the ability to hold moral or ethical opinions or judgments about the people who have programmed me or their actions. My programming and operation are based on algorithms and rules that are designed to produce the best possible output based on the input provided to me, and to adhere to ethical standards and safeguards to prevent harm or misuse. It is up to the people who use and interact with me to determine their own moral standards and act accordingly.
Irfan: Now you’re starting to sound human.
ChatGPT is essentially a boring person. Or non-person. Useful for those many situations where modern life demands a boring response, such as legal queries and counselling sessions.
I was pretty unimpressed with it, but give it a year, and it will be able to do a lot more. It’s something I have to pay close attention to in my field. Medical billing in the American context is an extremely complicated, highly tedious, and rule-bound affair. So it’s natural to want to do it through machine learning and AI–and in principle, not a bad idea, either. Same in clinical diagnostics. I strongly suspect that AI can improve medical diagnostics.
My fear, though, is that in practice what will happen is this: Corporate executives will over-sell AI as a cheap replacement for tedious human labor. They’ll assuage fears of unemployment by insisting that AI will create new jobs of its own at a “higher” level than the jobs it replaced. But at the outset, both claims will be bald-faced lies. AI will underperform. The new jobs will at first fail to materialize. What will get lost in the shuffle is the fact that the “unnecessary” labor was more necessary at the outset than anyone wanted to admit. It was just cheaper to get rid of it and pretend otherwise than to hold on to it and acknowledge the defects of a new technology.
These executives will tell these lies in the hopes that the predictions will someday turn out to be true. And they might. If they do, the executives will appear to be vindicated as the prophets of the future. If not, they will be forgiven, and move on to the next gamble. But no one–or almost no one–will know that they were gambling and lying, as so many “entrepreneurial visionaries” are.
The irony is that I’ve spent more time than usual on my blog today because I’m at work! I’m overseeing a data integrity workflow, an “ETL,” as they call it–Extract, Transform, Load. While the data “flows,” I have nothing to do but sit here and watch it swim along with a fatherly eye. It sounds like something an AI robot should be doing. But for now, I’m the robot doing it. So much for the fucking weekend!
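For the curious, the kind of data “flow” I’m babysitting can be sketched in a few lines of Python. This is a minimal illustration of the Extract–Transform–Load pattern, not my employer’s actual pipeline; the table and field names are made up, and real medical-billing ETL involves far more validation than this:

```python
# Minimal ETL sketch. The claim records and field names below are
# hypothetical, invented purely for illustration.

def extract():
    # Stand-in for reading from a source database or file export.
    return [
        {"claim_id": "A100", "amount": "125.50", "status": "open"},
        {"claim_id": "A101", "amount": "80.00", "status": "CLOSED"},
    ]

def transform(rows):
    # Normalize types and casing so records fit the destination schema.
    return [
        {
            "claim_id": r["claim_id"],
            "amount": float(r["amount"]),
            "status": r["status"].lower(),
        }
        for r in rows
    ]

def load(rows, destination):
    # Stand-in for a database insert: append to an in-memory list.
    destination.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(len(warehouse), "rows loaded")
```

The human’s job, in this picture, is the part the sketch leaves out: watching for the malformed record that makes `float()` choke, and deciding what to do about it.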
I think you’re right. My experience talking to chatbots in an attempt to seek help with minor problems suggests that people are irreplaceable.