93.2% of Gen Z feel positive about AI

This article is part of a series titled “Environment in Danger? Opportunities and Risks From a Young Generations’ Point of View”. Even though 70% of the younger generation think that gluing oneself to the street is not the right way to deal with the current situation, the prevailing feeling is that something must be done about climate change. We asked GrECo’s youngest generation for their opinions.

The United Nations has recently indicated that today’s youth have the most positive outlook on Artificial Intelligence (AI), with an astonishing 93.2% stating that they feel positive about it and 68% of those expressing full trust in this new technology. Furthermore, 80% of the younger generation use AI tools daily.

How can AI help industries protect the environment?

The ever-increasing use of AI is spreading across all industries and sectors as we experiment and explore how this novel technology can help us in a myriad of ways. From automotive to agriculture, mechanics to media, AI is taking the world by storm and is prevalent in many facets of our everyday lives. Take self-driving cars, for example: because these vehicles rely on AI to plan and take the most efficient routes possible, it is believed they could cut road CO2 emissions in half.

Another example is the agricultural industry, which has been benefiting tremendously from this relatively new technology. AI equips farmers with real-time insights into their crops, allowing them to treat their crops and optimize resource usage. This practice, for instance, has led to a 30% increase in peanut harvests in India. AI has also been applied to monitoring satellite images and analysing historical agricultural trends, which has led to the accurate prediction of hazardous weather patterns. The really exciting part is that this practice could also be used on a large scale to monitor corporations’ and businesses’ emission targets.

What risks does AI pose to the environment and to us as individuals?

Despite potentially being a good tool for fighting climate change, AI also has some considerable downsides, two of which have a huge environmental impact: emissions and water consumption.

We previously said that AI in self-driving cars could, in theory, cut emissions in half. On the flip side, training a single large AI model releases nearly 300,000 kilograms of carbon dioxide equivalent emissions into the atmosphere. Put into perspective, this is equal to five times the lifetime CO2 emissions of an average car in the US. This naturally calls the applicability and environmental friendliness of AI into question. Another consideration for the future is how to power the server farms behind AI with renewable energy sources. As companies become increasingly aware of and try to implement their ESG commitments, the pollution that AI causes raises the question of whether it is practical and feasible to develop AI models to the point where they produce more benefit than damage in terms of emissions.
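As a quick sanity check, the two figures quoted above can be put side by side in a short back-of-the-envelope calculation. The sketch below simply derives the per-car lifetime figure implied by the stated five-to-one ratio; it is an illustration, not a measurement.

```python
# Back-of-the-envelope comparison using the figures quoted above.
# The per-car lifetime figure is only implied by the stated 5x ratio, not measured here.

model_training_co2e_kg = 300_000   # approx. CO2e released to train one large AI model (article figure)
ratio_vs_car_lifetime = 5          # stated as five times an average US car's lifetime emissions

car_lifetime_co2e_kg = model_training_co2e_kg / ratio_vs_car_lifetime
print(f"Implied lifetime CO2e of an average US car: {car_lifetime_co2e_kg:,.0f} kg")
# -> roughly 60,000 kg, i.e. about 60 tonnes over the car's lifetime
```

In other words, on the article’s own figures, a single training run of a large model sits in the same emissions league as several cars driven for their entire lifetimes.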

In addition to emissions, AI consumes colossal amounts of water. Uploading videos to TikTok, asking ChatGPT a question, and even running a Google search all require huge amounts of electricity in the providers’ data centres. To keep the computers in those data centres from overheating, water is needed, and not just any water will suffice: the cooling water needs to be of drinking water quality to avoid bacteria and corrosion. According to Microsoft’s latest environmental report, the amount of water the company used worldwide in 2022 was the equivalent of 2,500 Olympic-sized swimming pools, and its water consumption in 2022 increased by a dramatic 34% compared to 2021. The knock-on impact on local drinking water stocks was therefore considerable. According to experts, a decisive factor in this increase is likely to be Microsoft’s growing AI focus and its partnership with the ChatGPT maker OpenAI, for which Microsoft also provides computing capacity.
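To make the pool comparison more tangible, the short calculation below converts it into litres. The volume of roughly 2,500 cubic metres per Olympic pool is our own assumption for illustration, not a figure from Microsoft’s report.

```python
# Rough conversion of the reported 2022 water use into litres.
# Assumption for illustration: one Olympic-sized pool holds about 2,500 cubic metres.

pools = 2_500                          # equivalent number of Olympic pools (article figure)
litres_per_pool = 2_500 * 1_000        # 2,500 m3 per pool x 1,000 litres per m3 (assumption)

total_litres_2022 = pools * litres_per_pool
increase_vs_2021 = 0.34                # reported 34% year-on-year increase

total_litres_2021 = total_litres_2022 / (1 + increase_vs_2021)
print(f"2022 water use: ~{total_litres_2022:,.0f} litres")
print(f"Implied 2021 water use: ~{total_litres_2021:,.0f} litres")
# -> about 6.25 billion litres in 2022 and roughly 4.7 billion litres in 2021
```

On those assumptions, the 2022 figure works out to roughly 6.25 billion litres, with 2021 at around 4.7 billion litres.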

As if the environmental factors were not enough of an eye-opener as to whether Gen Z should be embracing AI so willingly, there are also sociodemographic factors to consider. For AI to develop, immense amounts of data and good processing systems are needed. As the data available to a system grows, so does its capability, but because that data reflects historical human decisions, it also carries human biases, and AI may therefore be prone to making unethical or discriminatory decisions. This was best depicted by Rumman Chowdhury in relation to an applicant’s eligibility for a loan. In the 1930s, banks in Chicago were notorious for refusing loans based on demographic and ethnic criteria. If an AI system is given all the historical banking data, even without being told gender or race, according to Chowdhury the software would still be able to pick up this information implicitly, leading to biased decision-making, especially with regard to the African American communities in the area. There is no doubt that improper model development and oversight can lead to biased decision-making, which would in turn cause unexpected consequences and increased responsibilities for companies.
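The mechanism Chowdhury describes, a model recovering a protected attribute from correlated features, can be shown with a small synthetic sketch. Everything in it (the postcode proxy, the income feature, the approval history) is hypothetical and made up for illustration; it is not her model or any real lending data.

```python
# Minimal synthetic sketch of proxy bias: the protected attribute is never a model input,
# but a correlated feature (a hypothetical postcode) lets the model recover it anyway.
# All data here is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                          # protected attribute, NOT given to the model
postcode = (group + rng.random(n) < 1.2).astype(int)   # proxy feature, strongly correlated with group
income = rng.normal(55, 10, n)                         # legitimate feature, independent of group

# Historical approvals encode past discrimination: group 1 was approved far less often.
hist_approved = ((income > 45) & ((group == 0) | (rng.random(n) < 0.3))).astype(int)

X = np.column_stack([income, postcode])                # training data deliberately excludes `group`
model = LogisticRegression(max_iter=1000).fit(X, hist_approved)

proba = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"Mean predicted approval probability for group {g}: {proba[group == g].mean():.2f}")
# Despite never seeing `group`, the model scores group 1 markedly lower,
# because the postcode proxy carries that information from the biased history.
```

Even though the protected attribute is never an input, the trained model approves the two groups at very different rates, because the proxy feature carries that information forward from the biased history.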

And the risks don’t stop there. AI is known for its ability to breach security standards and infiltrate everyday applications with ease. In fact, 77% of regular tech users use AI-powered software daily without even realising it. Since it is extremely easy to disguise an AI app as a genuine online service, it is also very easy to cause a data breach and extract data from end users. AI software created with malicious intent is a very powerful tool for endangering data privacy and will undoubtedly bring risks that do not yet even exist in the cyber realm. The ability to retrieve sensitive information from individuals, combined with the next-generation AI technology known as “deepfakes”, could easily lead to identity theft on a scale so far unprecedented in our society. Future risks aside, AI software already holds a significant amount of confidential data, and not necessarily through theft: research by Cyberhaven found that 11% of the data employees paste into ChatGPT is confidential. Employers, businesses, and society as a whole need to take serious steps to introduce regulations and precautionary measures before they happily open the door to AI.

So, what can we conclude from all of this? AI holds both promising benefits and significant challenges. By enabling completely new technologies such as self-driving cars, it also gives rise to new, complex risks. At the same time, risk assessment itself could be enhanced with the help of AI, for example through data analysis. When taking advantage of all the benefits of AI, companies should also strive to navigate its downsides, such as the environmental impact, ethical decision-making, and security vulnerabilities.

Marko Talic

Account Executive

T +43 664 888 447 96
