What Happens When AI Automates Killing
Part 1 of 3 on AI and Societal Impacts
The Dream
All nightmares begin as dreams. Imagine it’s the mid-1950s and a group of researchers sets out to create a new kind of intelligence, one based on computer logic. Computer scientist John McCarthy coined the term ‘artificial intelligence’, or AI for short, for the 1956 Dartmouth Conference. Artificial intelligence is, in some ways, an unfortunate name, because this form of intelligence is as human as anything else our species has created. There is nothing artificial about it. The outcomes of that original conference have been nothing short of astounding.
ChatGPT (Chat) was released by OpenAI on November 30, 2022. It is based on what are known as large language models, or LLMs. These models are trained on enormous amounts of the natural language we use every day; from that text they learn statistical patterns that let them produce fluent subject-predicate sentences and carry out apparent reasoning. Generative AI refers to the fact that these systems produce new content rather than merely retrieving it, and some of their capabilities are emergent, arising without explicit programming or human intervention. It’s like having a smart neighbor next door who comes up with better ideas or data than you. Instead of ‘garbage in, garbage out’, as old computer models dictated, it’s garbage in and better, more useful information out.
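To see the statistical idea behind these models in miniature, here is a toy sketch, far simpler than any real LLM, that learns which word tends to follow which in a tiny corpus and then generates text by sampling. Real systems use neural networks trained on vast corpora, but the core principle of predicting the next token from patterns in text is the same; the corpus below is invented for illustration.

```python
# Toy bigram "language model": count which word follows which in a corpus,
# then generate text by repeatedly sampling a plausible next word.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ran on the grass".split()

# Build the bigram table: for each word, every word observed to follow it.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

# Generate a short continuation starting from a seed word.
word = "the"
output = [word]
for _ in range(6):
    options = following[word]
    if not options:      # no observed successor; stop generating
        break
    word = random.choice(options)
    output.append(word)
print(" ".join(output))
```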

Chat, like all new technologies, amplifies our ability to impact the world. It can process information, handle complex questions, make predictions, and structure results. It is not just a ‘tool’ like a hammer, for example, but a special kind of tool, one with cognitive capabilities and complex feedback.
Query on ‘Evil’
A few days ago, with the help of a friend, Stephan, I queried Chat to help me understand evil and how Chat self-referenced the concept and its application. My queries, which could just as easily be sent programmatically (see the sketch after this list), were:
- What is an evil character, including its characteristics?
- Can AI be used for evil purposes?
- Can AI users who commit evil escape from personal responsibility?
- Should persons or companies who use AI for evil purposes be held accountable?
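For readers who want to reproduce a session like this one, here is a minimal sketch using the openai Python package (v1+); it assumes an OPENAI_API_KEY environment variable, and the model name is an illustrative assumption, not a record of the exact session.

```python
# Minimal sketch: send the four queries in sequence, keeping the running
# conversation so each answer can build on the previous ones.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

queries = [
    "What is an evil character, including its characteristics?",
    "Can AI be used for evil purposes?",
    "Can AI users who commit evil escape from personal responsibility?",
    "Should persons or companies who use AI for evil purposes be held accountable?",
]

history = []
for query in queries:
    history.append({"role": "user", "content": query})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration only
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"Q: {query}\nA: {answer}\n")
```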
Chat took the queries immediately, thought for a moment, and then began to carry on a conversation with me. Its answers, in abbreviated form, stated the following:
- An evil character is one who knowingly, and often systematically, does harm to others: harm that goes beyond self-defense or the protection of one’s immediate life.
- AI is a tool and can be used for evil or good purposes. It is only a tool. Chat repeated this point several times across multiple inquiries.
- One cannot escape personal responsibility by using AI in an evil or harmful way. Companies are likewise responsible when they use AI in harmful, destructive ways.
- Laws were written to protect people, and companies that violate them should be prosecuted as well. This, Chat stated, may be an uphill battle depending on the courts and jurisdictions involved.
Lavender and The Gospel
Israel’s war against Palestinians in Gaza is the first war in human history to use AI to kill masses of people, 80% of whom are civilians, women, and children. Israel’s AI systems, code-named Lavender and The Gospel, are structured and oriented toward the rapid generation of targets and the prompt execution of anyone or anything identified. They are based on machine-learning algorithms, data probabilities, and associations, and they are designed to destroy buildings or people. They are fundamentally different from Chat, but AI nonetheless.
What makes them fall under the rubric of AI is that they can process large amounts of data from multiple sources, ‘reason’ over probabilities, and draw conclusions from inference almost instantaneously. They were created for war.
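To make concrete what ‘reasoning’ over probabilities means at this level, here is an abstract sketch of the general pattern the reporting describes: weak signals from multiple sources are fused into a single probability, and a threshold turns that probability into a yes/no decision. Every signal, weight, and number below is hypothetical; this is a generic pattern-recognition pipeline, not the classified systems themselves.

```python
# Illustrative only: fuse several weak, hypothetical signals (phone metadata,
# social-media links, movement data) into one probability, then threshold it.
import math

def fuse_signals(log_odds_prior: float, signal_log_odds: list[float]) -> float:
    """Combine independent evidence in log-odds space (naive-Bayes style)."""
    total = log_odds_prior + sum(signal_log_odds)
    return 1.0 / (1.0 + math.exp(-total))  # convert back to a probability

signals = [0.9, 0.4, 0.7]   # hypothetical: each signal nudges the odds upward
probability = fuse_signals(log_odds_prior=-3.0, signal_log_odds=signals)

THRESHOLD = 0.5             # one number separates "ignore" from "flag"
flagged = probability >= THRESHOLD
print(f"p = {probability:.2f}, flagged = {flagged}")
```

The moral weight of such a system sits in that final comparison: lower the threshold, or inflate the prior, and more people are flagged, with the false positives invisible inside the arithmetic.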

As ‘tools’, they are highly appealing for three reasons: a perceived lack of moral constraints (decisions are based on data and technology); a low coefficient of resistance (they enable action without friction); and outcomes that somehow belong to a third party (nothing personal, no moral consciousness required).
Lavender and The Gospel incorporate three interrelated components that operate as parts of a single system:
- Automated visual and voice surveillance and connections: 24-hour observation of physical spaces, facilities, and individuals. Cameras, satellites, and other visual apparatus are located everywhere. Conversations are picked up from phones, receivers in public spaces, and social media. Little, if any, private space exists for individuals.
- Algorithmic identification: items or persons flagged as ‘enemy’ populate the system, which is connected to weapons such as tanks, drones, and aircraft. It simply draws on its database, develops a set of probabilities, and makes a decision to execute.
- Kill orders given to soldiers and weapons: these orders are relatively straightforward, as the AI communicates instructions to soldiers to destroy humans or inanimate objects. The destruction or killing is not mediated by emotion, empathy, or regard for human life.

Imagine that you are identified and networked (through visual data and data on your associations) as affiliating with a social group the Israeli government defines as terrorist. Recall that the Israeli government labeled UNRWA, the United Nations Relief and Works Agency for Palestine Refugees, a terrorist organization and threw it out of Israel. Next, you are tracked to your family’s home. The order to execute is sent to a soldier who tracks you to the house. It turns out that three sets of data indicate there are others assumed to be terrorists in the neighborhood. An order is then transmitted to a drone loitering in the sky to annihilate the identified house and its neighbors. This destruction can apply to private houses, residential buildings, or public spaces.
In one situation, for example, the subject identified for a kill walked out the back door to avoid the Israel Defense Forces. Instead of the suspect, all 12 members of his family who lived at the identified location were killed. A kill, not an execution.
The Israel Defense Forces (IDF), in order to make its AI effective, has loosened the accepted rules of engagement and the permitted levels of collateral damage. For senior commanders, for example, it has allowed triple-digit civilian deaths per military target, and this assumes the military targets are accurate in the first place. It has also brought a proliferation of non-military targets.
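To illustrate how such a policy change works mechanically, here is a hedged sketch in which the rules of engagement reduce to a table of numbers. All ranks and limits below are hypothetical; the point is that ‘loosening’ is nothing more than loading a different table.

```python
# Illustrative only: a rule of engagement expressed as a configurable limit.
# "Loosening" the policy is just raising a number; the comparison itself
# carries no moral content.
from dataclasses import dataclass

@dataclass
class ProposedStrike:
    target_rank: str
    expected_civilian_deaths: int

def authorized(strike: ProposedStrike, limits: dict[str, int]) -> bool:
    """Authorize when predicted civilian deaths fall under the rank's limit."""
    return strike.expected_civilian_deaths <= limits[strike.target_rank]

strict = {"junior": 1, "senior": 15}      # hypothetical tighter policy
loosened = {"junior": 20, "senior": 100}  # hypothetical triple-digit policy

strike = ProposedStrike(target_rank="senior", expected_civilian_deaths=80)
print(authorized(strike, strict))    # False
print(authorized(strike, loosened))  # True
```

The same proposed strike is rejected under one table and approved under the other; nothing else in the system has to change.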
The Israeli military has invented the category of “power targets”: targets that have no military value but are designed to shock Palestinians and drive them to pressure Hamas. Leveling high-rise buildings, shopping centers, hospitals, and schools can all qualify as power targets. If children are part of the kill, that’s just too bad. According to Israeli military sources, all of this is known beforehand, including the members of a household.
Chat Judging Lavender & The Gospel
According to my inquiry on evil with Chat, both Lavender and The Gospel are evil. They are geared for destruction, having participated in the killing of over 75,000 people, including 19,000 children, with thousands more under the rubble. There is no kernel of truth in the claim that these killings were defensive. The argument that Hamas was everywhere and anywhere, necessitating the killing of all civilians, including kids shot at close range, is beyond the rationality of Chat or of basic human morality.
Members of the IDF and the Israeli government should be charged with genocide or, minimally, crimes against humanity. American firms supporting AI in Israel should be charged as well: Palantir, Google, Microsoft, and a little-known but powerful company, Ondas Holdings of West Palm Beach, Florida. Palantir, in spite of its support for genocide in Israel, supplies New York City hospitals with software to review patient records; it is currently the subject of protests calling for its removal.

Chat’s determination is similar to that of human groups working for justice. Israel’s activities in Gaza against Palestinians are evil, as defined by AI. Israel is also busy exporting evil to the U.S., such as the Department of War’s contract to purchase $210 million worth of advanced cluster shells from Israel. Israel dropped cluster bombs on Lebanon in violation of UN law. As of 2025, 120 nations have signed a treaty banning the production, transfer, and use of cluster bombs. Unexploded cluster munitions stay alive in fields for years, wounding or killing people long after conflicts end.
Call and/or email any of these organizations and tell them to stop supporting genocide in Gaza. The more calls, the better.
