Are some of the AI tools I can use more ethical than others?
– Bet’s Choice
No, I don't think any generative AI tool from the major players is more ethical than the others. Here's why.
For me, the ethics of generative AI use can be broken down into concerns about how the models are developed (specifically, how the data used to train them was accessed) and ongoing worries about their environmental impact. Powering a chatbot or image generator requires an enormous amount of data, and the decisions developers made in the past, and continue to make, to obtain this trove are questionable and shrouded in secrecy. Even the models that people in Silicon Valley call "open source" keep their training datasets hidden.
Despite complaints from authors, artists, filmmakers, YouTube creators, and even social media users who don't want their posts scraped and churned into chatbot sludge, AI companies generally act as if these creators' consent is not required to use their output as training data. A familiar claim from AI proponents is that obtaining this vast quantity of data with the consent of the humans who produced it would be too unwieldy and would hinder innovation. Even for companies that have struck licensing deals with major publishers, that "clean" data is an infinitesimal part of the colossal machine.
Although some developers are working on models that fairly compensate people when their work is used to train AI, these projects remain fairly niche alternatives to the mainstream behemoths.
Then there are the ecological consequences. Among the major options, the current environmental impacts of generative AI use are similar. Although generative AI still represents a small slice of humanity's aggregate strain on the environment, gen-AI software tools require far more energy to create and run than their non-generative counterparts. Using a chatbot for research assistance isn't the same as just searching the web on Google; it contributes considerably more to the climate crisis.
The amount of energy required to run these tools may yet come down. New approaches, like the one behind DeepSeek's latest model, sip precious energy resources rather than gulping them. But the big AI companies seem more interested in accelerating development than in pausing to consider approaches less harmful to the planet.
How do we make AI wiser and more ethical, rather than just smarter and more powerful?
–Galaxy Brain
Thank you for your wise question, friend. This dilemma may be more of a common topic of discussion than you'd expect among the people building generative AI tools. Anthropic's "constitutional" approach to its Claude chatbot, for example, attempts to instill a set of core values into the software.
The confusion at the heart of your question can be traced back to how we talk about this software. Recently, several companies have released models focused on "reasoning" and "chain of thought" approaches to conducting research. Describing what AI tools do with human-centric words and phrases makes the boundary between people and machines unnecessarily hazy. I mean, if a model can truly reason and have chains of thought, why couldn't we send the software down a path of self-enlightenment?
Because it doesn't think. Words like reasoning, thinking, and understanding are just ways of describing how the algorithm processes information. When I take an ethical stand on how these models are trained and on their environmental impact, my position is grounded not in a merging of prediction patterns or probable words, but in the sum of my personal experiences and closely held beliefs.
The ethical aspects of AI outputs always circle back to our human inputs. What are the user's intentions when prompting a chatbot? What biases were present in the training data? How did the developers teach the bot to respond to controversial queries? Rather than focusing on making the AI itself wiser, the better path is to cultivate more ethical development practices and user interactions.