Generative AI refers to a class of artificial intelligence systems designed to generate text, images, music, and other forms of content that are similar to, or indistinguishable from, content created by humans. These systems learn patterns and structures from large datasets and then generate new examples that mimic those patterns.
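
To make the idea of "learning patterns and generating new examples" concrete, here is a toy sketch in Python: a word-level Markov chain that learns which words tend to follow which in a small corpus and then generates new text by sampling from those learned patterns. Modern generative AI systems use deep neural networks trained on vastly larger datasets, but the core idea of producing plausible continuations from observed statistical patterns is the same in spirit. The corpus below is invented purely for illustration.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Learn which words tend to follow each word in the corpus."""
    words = corpus.split()
    transitions = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions: dict, start: str, length: int = 10) -> str:
    """Generate new text by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:               # dead end: no observed continuation
            break
        word = random.choice(followers)  # sample from the learned patterns
        output.append(word)
    return " ".join(output)

# A tiny illustrative corpus; real systems train on billions of words.
corpus = ("the students read the texts and the students discussed "
          "the texts and the professor discussed the students")
model = train(corpus)
print(generate(model, "the"))  # e.g. "the students discussed the texts and ..."
```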

AI has tremendous potential to produce great benefits in many areas of life, including business, science, and education. It also has the potential to produce significant harms. The Emmanuel Community is responsible for establishing policies that best promote its benefits while minimizing its harms, especially in academic areas.

The Emmanuel Community holds a variety of opinions about using AI in higher education. Many are concerned about its negative impact on teaching and learning, especially in matters of academic integrity. Others see AI as a potentially useful pedagogical tool, not unlike other previously disruptive technologies that initially spawned controversy, such as the internet, mobile devices, and social media. The purpose of this website is to serve as a resource for members of the Emmanuel Community to examine these harms and benefits, to guide us all in establishing institutional and individual policies for AI's use, and to act as a central clearinghouse for discussions of how generative AI might best be used at Emmanuel.

 

AI ETHICS 

Early attempts at constructing an AI ethics have focused less on discussing AI's potential benefits and more on identifying its potential harms, suggesting how to avoid them, and formulating all of this into a set of rules for designing and regulating AI systems. The general moral principles upon which these rules rest are mostly the consequentialist principle (the right thing to do is whatever promotes the greatest good and/or least evil for all those affected by an action) and the respect-for-persons principle (everyone has the same intrinsic value and thus should be treated equally). Lists of such rules vary somewhat, but most include the following as essential to any ethically acceptable version of AI.

 

AI RULES
 
  1. Privacy

AI models require large datasets to function effectively. For companies using dedicated AI systems, it is important that this data be kept private from competitors and hackers to protect their business interests. This data often includes personal customer information, such as phone numbers, addresses, and social security numbers. Other types of personal information can be found in Large Language Models (LLMs), information that allows such systems to track an individual's location, consumer preferences, medical history, political views, age, and gender. In addition, anything an individual submits to an AI system may become part of that system and thus potentially available to other users. Because this information can be used for harmful purposes, AI systems should be designed and used in ways that respect the privacy of businesses and individuals.
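
One practical response to this concern, offered here as a minimal sketch rather than a complete solution, is to scrub obvious personal identifiers from text before it is ever submitted to an AI system. The patterns below catch only a few common US-style formats (social security numbers, phone numbers, email addresses) and are illustrative, not exhaustive; real redaction tools are far more thorough.

```python
import re

# Illustrative patterns for a few common US-format identifiers.
# Real PII detection requires far more than simple regular expressions.
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers before sending text to an AI system."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Call Jane at 617-555-0142 or email jane.doe@example.edu, SSN 123-45-6789."
print(redact(prompt))
# Call Jane at [PHONE REDACTED] or email [EMAIL REDACTED], SSN [SSN REDACTED].
```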

  2. Equity

One of the ethical issues of AI use concerns equal access to its benefits. There is a risk that certain groups may be excluded from benefiting from AI technologies due to factors such as socioeconomic status, geography, or digital literacy, widening already existing disparities. Those without access to reliable internet connections, quality education, or financial resources may be further marginalized, reinforcing existing power imbalances.

  3. Transparency

The main issue of transparency in AI ethics revolves around the lack of clear understanding of how AI systems make decisions. While the algorithms used in AI models produce answers to prompts, just how they arrive at those responses is unknown to users. In addition, the quality of the data used is often unknown as well. This lack of transparency makes it difficult to identify the biases and errors embedded within the data and the algorithm, to trust the outcomes as reliable, and to assign accountability for the decisions made by AI systems. To avoid these problems, AI systems need to clearly identify the data used and the reasoning behind their responses, making the AI decision-making process accessible, reliable, and accountable.
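
By contrast, a deliberately simple model can show its reasoning. The sketch below, using invented weights for a hypothetical loan-scoring model, prints the contribution each factor makes to the final score; a deep neural network with billions of parameters offers no comparably readable account of its decisions. This illustrates what transparency can look like; it does not depict how any particular AI system actually works.

```python
# A transparent scoring model: every factor's contribution is visible.
# Weights and applicant data are invented for illustration only.
weights = {"income": 0.5, "debt": -0.7, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

contributions = {factor: weights[factor] * applicant[factor] for factor in weights}
score = sum(contributions.values())

for factor, value in contributions.items():
    print(f"{factor:>15}: {value:+.2f}")   # each factor's effect on the decision
print(f"{'total score':>15}: {score:+.2f}")
```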

  4. Safety

The safety issue in AI ethics concerns the potential for AI systems to cause harm to individuals, society, or the environment, especially as these systems become more advanced and capable of operating autonomously. Without human oversight, such systems can behave unpredictably, with harmful unintended consequences. In applications such as autonomous vehicles or autonomous military weapons systems, for example, errors or malfunctions can have severe consequences, including loss of life or significant economic damage. AI can also be used to spread disinformation and support criminal behavior. Some even worry about advanced AI systems, systems that learn on their own and set their own goals, coming to control the world. Ethical AI systems must find ways to prevent harmful outcomes such as these.

  5. Sustainability

Large AI systems require huge data centers that use tremendous amounts of energy, both to run the models and especially to train them. This has the potential to significantly increase carbon emissions and accelerate climate change. By one reliable estimate, data centers, driven in part by AI, will use as much electricity in 2026 as Japan currently consumes. In addition, AI systems use a great deal of fresh water to cool the computers running them, often competing with human and agricultural needs. Major AI investors are already planning to construct nuclear power plants to meet some of this demand. The hope is that future AI systems will themselves discover means of conserving energy in a variety of areas, thus mitigating their carbon footprint. In the meantime, sustainable energy use remains a significant problem.

  6. Fairness

One of the strongest requirements of an ethical AI system is that it be used fairly, without discrimination. The discrimination found in AI systems is usually not overt but appears in the form of biases embedded in the algorithms and data the system uses. Those who construct algorithms sometimes have their unconscious biases reflected in their creations and in the data on which those systems are trained. This is especially true for some LLMs, which are trained on much of the information found on the Internet, information that contains a great deal of bias. This shows up, for example, in systems that judge mortgage eligibility, approve health care treatments, guide criminal sentencing, screen job applicants, and operate in many other areas. These biases reflect and perpetuate stereotypes within a society, reinforcing social inequality. Addressing them requires careful attention to the data used to train AI systems, to the design of the algorithms themselves, and to the broader societal context in which these systems are used.
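
One common way to surface such bias, sketched minimally below on made-up records, is to compare a system's approval rates across demographic groups, a check often called demographic parity. A large gap does not by itself prove discrimination, and this is only one of several competing fairness metrics, but it is the kind of audit the paragraph above calls for.

```python
from collections import defaultdict

# Hypothetical (approval_decision, group) records from a model's outputs.
decisions = [
    (1, "A"), (1, "A"), (1, "A"), (0, "A"),   # group A: 3 of 4 approved
    (1, "B"), (0, "B"), (0, "B"), (0, "B"),   # group B: 1 of 4 approved
]

def approval_rates(records):
    """Compute the fraction of positive decisions per group."""
    approved, totals = defaultdict(int), defaultdict(int)
    for decision, group in records:
        totals[group] += 1
        approved[group] += decision
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
print(rates)                                   # {'A': 0.75, 'B': 0.25}
gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity gap: {gap:.2f}")    # 0.50: worth investigating
```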

  7. Reliability

The reliability of LLMs is an extremely serious issue in AI ethics. Responses to prompts by AI systems such as ChatGPT are often inaccurate. Such systems are designed to predict what text usually comes next, and sometimes those predictions are erroneous. This can be due to inadequate datasets, mistakes in algorithms, the complexity of the problem, unclear prompts, or misinterpreted social context. Under these conditions the system simply makes something up. While some AI systems are more accurate than humans, others have an unacceptably high rate of error. Some estimates place the inaccuracy of LLMs between 5 and 20 percent, depending on the deployment. Wildly implausible responses, caused by the algorithm seeing patterns in the data that are not there, are often called "hallucinations". This level of inaccuracy is clearly a problem in many areas, such as health care diagnosis, facial recognition, and scientific and academic research. For AI systems to be ethically acceptable, they clearly need to be more accurate than is currently the case.
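
The mechanism behind such fabrication can be sketched in a few lines. A language model assigns a probability to each possible next token and then samples from that distribution; nothing in the procedure checks whether the resulting sentence is true. The probabilities below are invented for illustration.

```python
import random

# Invented next-token probabilities after the prompt
# "The capital of Australia is" -- rated by plausibility, not truth.
next_token_probs = {
    "Canberra": 0.55,    # correct
    "Sydney": 0.35,      # fluent, plausible, and wrong
    "Melbourne": 0.10,   # also fluent and wrong
}

def sample_next_token(probs: dict) -> str:
    """Sample a continuation in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly 45% of the time this completion is confidently incorrect:
# the model optimizes for what usually comes next, not for what is true.
print("The capital of Australia is", sample_next_token(next_token_probs))
```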

  8. Responsibility

The issue of AI responsibility concerns both those who construct AI systems and those who use them. If something goes wrong with an AI system, if it makes a poor prediction or a biased judgment, for example, it is not the system's fault. The system has no agency. Rather, the responsibility lies with those who design and train it. The more important issue here concerns those who misuse AI systems. The term "accountability" best captures this issue of personal responsibility. People with moral agency ought to be held accountable for their misuse of AI systems. Such misuse may include the spread of disinformation, especially through "deep fakes," which use AI techniques to create audio and visual content that appears authentic but is not. Sometimes this false information is used to commit cybercrimes, such as fraudulent payment demands, identity theft, or ransomware attacks. In education, one of the main concerns is plagiarism: students presenting material produced by AI as if it were their own creation. An adequate AI ethics holds those responsible for such misuses of AI morally accountable for their harmful actions.

AI AND BEING HUMAN

Clearly, some progress has been made in creating an AI ethics. We have some idea of what rules ought to govern its use, and some idea of the principles upon which these rules are based. Even though enforcing these rules in practice will be challenging, we at least have an idea of what counts as right or wrong in our use of AI. But ethics is not just about right and wrong; it is also about good and evil. By 'good' we mean what we value, especially what counts as a rich, flourishing life. This is the life that philosophers for over two millennia have called the "good life" and what positive psychologists today talk about when they discuss happiness. The idea of a full, rich, happy life depends on the idea of what sorts of beings we are, on identifying what is most important in being human, in being a person, and on figuring out how to develop these aspects of ourselves as fully as possible.

Traditionally, the model of human nature that has dominated this topic is something like the following: we are beings who can understand, reason, make practical judgments based on our values, solve problems, exercise our free will, exhibit our creativity, react emotionally to situations we find meaningful, and be conscious of our world, ourselves, and others. It is in the challenge to this traditional model that we find one of the greatest threats to human well-being posed by AI. The threat lies in the subtle, gradual, almost subconscious cultural absorption and acceptance of AI's model of what it means to be a human being.

 

At the root of AI's model of humanity lies an analogy that has been around for more than fifty years: the claim that human beings are like very powerful computers. This analogy has proven very fruitful for progress in cognitive science, in both the brain sciences and the mind sciences. Modeling memory in a machine, for example, often helps us understand how humans remember, and vice versa. The same is true for other mental states as well. The problem arises when we think of ourselves not simply as like machines in some respects, but as nothing but machines, nothing but fancy information processors, very sophisticated computers made of neurons. The tendency to think of ourselves in this manner has been around for fifty years, but it has a stronger hold on us now and is harder to resist in the age of generative AI. Even in its current early stages, machine learning is so powerful that it seems able to duplicate even the most "human" of abilities. If machines can eventually do all that previously seemed to be the special province of humans, such as understanding, solving problems, and being creative, then it seems to follow that we must simply be machines ourselves.

 

This is a view that creeps up on us insidiously; it is not something that one day we just decide to adopt consciously and deliberately. Perhaps it is true; perhaps we are just machines. With the decline of belief in dualism, the idea that we have not only bodies but also non-physical minds or souls, it does seem to follow that we are just machines. Here, the term 'machine' simply means that we are physical beings through and through. It is not an easy task to explain how a purely physical being can possess the special qualities traditionally ascribed to persons, such as understanding, reasoning, judgment, free will, consciousness, meaning, and emotions. But the project of doing so has been underway for some time and may be moving closer to success. So saying that we might be purely physical beings may not be a serious threat to our concept of what it means to be a human being.

 

TWO KINDS OF MACHINES

The real threat is the claim that 'we are machines' means we are the same kind of beings as the computers and algorithms that control the world of AI. But such machines do not understand anything; they simply write down the next bit of text that has been statistically determined to usually come next. They are not conscious of the world around them, let alone self-conscious, let alone conscious of others in a way that allows us to interact with them as persons. Their outputs have no meaning for them, let alone meaning colored by values, emotions, and often very strong desires, as is the case with us. We are conscious, rational, emotional, valuing human beings who relate to one another in meaningful ways, who make free choices about things and people that mean something to us, and who do things intentionally, with clearly understood goals in mind.

Even though we often treat AI responses as if they were human responses, AI systems are not like us. Even if we are machines, we are remarkably different sorts of machines than an AI robot could ever be, despite the willingness some people will inevitably have to think of it as a person, as very much like us. It can be quite disarming to interact with an AI system such as ChatGPT. Some use it not only as a personal assistant but also as a therapist, as a friend to ward off the loneliness that accompanies so much of modern life, and for many other purposes usually reserved for human beings. Under these circumstances, it is quite easy to anthropomorphize AI, to consider it a person just like us. But such systems are not like us. They are not the sorts of beings who can live the "good life," a life of flourishing and even of happiness.

 

AI AND THE ROLE OF THE HUMANITIES 

A lot has been said recently about the role of the humanities in higher education. The question raised is why anyone would want to major in one of the humanities these days, when everyone knows that their expected salaries will be statistically lower than those of STEM majors, business majors, or other more popular majors. Students are advised to major in more "serious" subjects, especially in areas that will produce better economic outcomes for them.

 

In fact, the economic figures cited in making this case are often incorrect. In my own field, philosophy, successful majors often make more money at mid-career than those in some of the more popular majors. Despite this, in a world where many understand the role of higher education in purely utilitarian terms, the humanities are becoming less popular. Some large universities are cutting back humanities programs in favor of more "career-oriented" areas of study. In response, the humanities have often tried to justify their existence by pointing to their usefulness in developing "soft skills," such as critical thinking, communication, collaboration, and problem solving, skills that are in high demand in today's job market. While this may be so, there is another, arguably more important, role for the humanities in higher education: identifying, reinforcing, and promoting the traditional model of what it means to be a human being, so that students may be encouraged not simply to get a job but to live as full, rich, and human a life as possible, the good life.

 

If we are to help transform the young men and women who come through our doors into people who aspire to develop their highest qualities, to become as wise, as virtuous, and as caring as possible, to make decisions designed to help others as well as themselves, and to live lives that are meaningful in the highest sense of the term, then we have to help them become aware of the model of humanness that underlies such a mission. We are not just fancy computers whose role in life is to process environmental input and output actions that allow us to eat, drink, be merry, and then die. This increasingly popular AI model of what it means to be a human being, the model that threatens to become accepted in our culture, may well serve the utilitarian value of higher education. But if this is all higher education is for, then we do not need it. Sophisticated training institutions will do just as well to prepare people for jobs, and we can forget about developing the skills of reading and writing, of understanding complex texts, and of thinking for ourselves, if AI can do all this for us, and someday perhaps even better than we can do it ourselves.

 

But the ideas and skills developed, for example, by reading Shakespeare, by arguing with Plato and Aquinas, by reading the New Testament, by engaging in creative writing exercises, by understanding a foreign language and its cultural underpinnings, by producing and creating works of art, and by becoming aware of the struggles to create and maintain democratic forms of government, create within us a much truer model of what it means to be a human being. AI promises tremendous benefits for humanity, especially if its use can be regulated in some of the ways discussed on this website. But we would serve ourselves well to keep in mind that AI does not replicate what is best in us; it does not model what it means to be human at its best. It is not a person, but simply a tool that may be used to enhance what it means to be a truly human person. It will be put to good use if it helps us do this, especially if it helps to embody the model of humanity that ought to be championed by higher education.