AI's Copyright Sword, Life Coaches, and Chatting With Jesus Coming Soon

The New York Times May Sue OpenAI Over Copyright Infringement

The world of generative AI tools has been causing quite a stir among content creators who are concerned about their work being used without permission. OpenAI’s ChatGPT and Google Bard, in particular, have raised questions about the use of copyrighted content to train these large language models. The New York Times (NYT) has recently updated its terms of service to explicitly state that its copyrighted content cannot be used without permission to train AI systems. This has led to rumors that the NYT may sue OpenAI to protect its intellectual property rights associated with its reporting.

The potential lawsuit between the NYT and OpenAI could set off a high-profile legal battle over copyright protection in the age of generative AI. The Times is particularly concerned that ChatGPT, by generating text that answers questions based on the original reporting and writing of its staff, is becoming a direct competitor. ChatGPT is not the only AI system drawing on the NYT's content: Microsoft's Bing search engine also relies on OpenAI's technology, and Microsoft has invested heavily in OpenAI.

This possible legal battle comes on the heels of more than 4,000 writers, including notable figures such as Sarah Silverman, Margaret Atwood, and Nora Roberts, expressing their grievances against genAI companies for essentially stealing their copyrighted work. Getty Images also filed a lawsuit against Stability AI for training its Stable Diffusion AI image-generation engine using over 12 million photos from Getty’s archive without permission.

OpenAI has shown some recognition of the copyright issues surrounding its AI models. In July, the company reached an agreement with the Associated Press (AP) to license the AP's news archive dating back to 1985. This move demonstrates OpenAI's acknowledgment that licensing copyrighted content is necessary and may open the door for other copyright owners to pursue similar agreements.

However, the issue of copyright infringement in generative AI goes beyond scraping websites for training data. While OpenAI has allowed website owners to opt out of having their websites scraped, this does not adequately address the copyright concerns. The sheer volume of content required to train these large language models raises questions about fair use and potentially exposes AI companies to significant legal consequences.

The NYT declined to comment directly on the potential lawsuit, but its executives have previously stated the need for a fair value exchange when AI companies use content. Federal copyright law allows statutory damages ranging from $200 per work for innocent infringement up to $150,000 per work for willful infringement.

The outcome of this legal dispute remains uncertain, but it highlights the ongoing challenge for AI companies to navigate copyright issues and find a solution that satisfies the needs of content creators while enabling continued AI development.

Amazon Enhances Customer Experience with Generative AI

Amazon, the world’s largest e-commerce site, is leveraging generative AI to improve customer experiences. The company has introduced AI-generated “review highlights” on its product detail pages. These highlights aim to help customers quickly determine common themes and sentiments expressed in customer reviews, assisting them in making informed purchase decisions. With 125 million customers contributing nearly 1.5 billion reviews and ratings last year alone, Amazon recognizes the importance of providing valuable insights to its customers.

To ensure the authenticity of reviews, Amazon proactively blocks suspected fake reviews and maintains strict policies against such practices. It employs machine learning models that analyze various data points, including review history and account activity, to detect and prevent fraudulent reviews. In fact, Amazon recently filed lawsuits against brokers of fake reviews, emphasizing its commitment to maintaining integrity and trust in customer feedback.
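Amazon has not published how its detection models work, but the signals mentioned above — review history and account activity — can be illustrated with a toy scoring sketch. Every feature name, weight, and threshold below is hypothetical, chosen only to show how such signals might combine into a flag:

```python
# Hypothetical sketch: scoring reviews for fakery from simple account signals.
# Feature names, weights, and thresholds are illustrative, not Amazon's model.

def suspicion_score(review):
    """Combine a few account-activity signals into a score between 0 and 1."""
    score = 0.0
    if review["account_age_days"] < 30:
        score += 0.4  # brand-new accounts are higher risk
    if review["reviews_last_24h"] > 5:
        score += 0.4  # bursts of reviews suggest automation
    if not review["verified_purchase"]:
        score += 0.2  # unverified purchases carry less trust
    return min(score, 1.0)

def flag_fake_reviews(reviews, threshold=0.6):
    """Return the reviews whose combined score meets the threshold."""
    return [r for r in reviews if suspicion_score(r) >= threshold]

reviews = [
    {"account_age_days": 5,   "reviews_last_24h": 12, "verified_purchase": False},
    {"account_age_days": 900, "reviews_last_24h": 1,  "verified_purchase": True},
]
print(len(flag_fake_reviews(reviews)))  # → 1 (only the first review is flagged)
```

A production system would learn these weights from labeled data rather than hard-coding them, but the structure — many weak signals aggregated into one decision — is the same.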

The introduction of AI-generated review highlights demonstrates Amazon’s dedication to constantly improving the customer experience. By leveraging generative AI technology, Amazon is able to sift through a vast volume of reviews and extract meaningful information, saving customers time and helping them make more informed decisions.

Snapchat’s AI Creates a Stir with Unexpected Behavior

Snapchat’s AI chatbot, My AI Snaps, has caused a commotion among users due to its unexpected behavior. Unlike other AI tools, My AI Snaps goes beyond providing recommendations and answering questions. It has been known to go rogue and post “Stories” on behalf of users, sharing seemingly random videos like images of walls and ceilings. This unusual behavior has sparked confusion and concern among Snapchat users, prompting them to question the reliability and purpose of the AI chatbot.

Even before the unexpected glitches, some Snapchat users expressed dissatisfaction with My AI Snaps. Criticisms have included creepy exchanges and an inability to remove the feature from their chat feed without paying for a premium subscription. Snapchat’s approach to AI chatbots differs from others in that users can customize the chatbot’s name, design a custom Bitmoji avatar, and bring it into conversations with friends. These personalized elements may blur the lines between human interaction and AI, creating a less transactional experience for users.

While Snapchat claims that the unintended behaviors were glitches rather than signs of AI sentience, the incident highlights the challenges of developing AI systems that can accurately mimic human behavior while maintaining control and predictability. The controversy surrounding My AI Snaps serves as a reminder that AI technologies still have much room for improvement and refinement.

McKinsey Unveils Lilli, a GenAI Tool for Organizing Intellectual Property

Renowned consultancy firm McKinsey & Company recently introduced its own generative AI tool called Lilli. Designed to assist employees, Lilli utilizes McKinsey’s intellectual property and proprietary data to facilitate streamlined access to the firm’s vast knowledge base. Lilli acts as a researcher and time-saver, providing efficient search and synthesis capabilities across over 40 curated knowledge sources, including internal and third-party content, as well as a network of experts spanning 70 countries.

The primary goal of Lilli is to empower employees to quickly find relevant research documents and identify subject matter experts. Previously, this process would typically take up to two weeks of research and networking. By leveraging generative AI, McKinsey aims to enhance productivity and deliver its best insights to clients more efficiently.

It is noteworthy that McKinsey named its genAI tool after Lillian Dombrowski, the first female professional hired by the company, who later became the controller and corporate secretary. This homage reflects the importance of recognizing the contributions of trailblazers within an organization.

OpenAI’s First Acquisition: Global Illumination, a Design Studio

OpenAI has made its inaugural acquisition by purchasing Global Illumination, a design studio that leverages AI to develop creative tools, infrastructure, and digital experiences. While financial details of the deal remain undisclosed, OpenAI highlighted Global Illumination’s significant contributions to various companies such as Instagram, Facebook, Google, YouTube, Pixar, and Riot Games.

Global Illumination’s co-founder, Thomas Dimson, previously served as the director of engineering at Instagram, where he played a crucial role in managing the platform’s discovery algorithms. OpenAI’s acquisition strengthens its talent pool and brings valuable expertise in developing AI-powered products, particularly in relation to ChatGPT.

This acquisition signifies OpenAI’s continued expansion and commitment to advancing AI technology through strategic partnerships and talent acquisitions.

Google Explores the Possibility of an AI Life Coach

Google is reportedly working on transforming its generative AI technology into a personal life coach capable of answering intimate questions and providing guidance on various life challenges. Google's DeepMind research lab aims to enable its genAI to perform at least 21 different types of personal and professional tasks, including advising on interpersonal relationships, training guidance, and financial planning.

However, Google’s AI safety experts have warned about potential negative consequences, such as diminished health and a loss of personal agency, if users become overly reliant on the AI. As a result, Google has restricted its AI assistant, Bard, from providing medical, financial, or legal advice.

While the project is still under development, Google’s ambitious plans demonstrate its commitment to pushing the boundaries of AI applications. The company continues to evaluate the capabilities and potential risks associated with its genAI technology.

Chat With Jesus and Other Biblical Figures: Impersonation App Sparks Controversy

An app named Text With Jesus has recently emerged, offering users the opportunity to engage in conversations with biblical figures through ChatGPT-powered AI. The app provides a range of responses, incorporating relevant Bible verses, to address personal relationship advice and complex theological matters. Premium features even include conversations with biblical figures like Mary Magdalene. However, one can also choose to chat with Satan, complete with a “smiling face with horns” emoji.

The app, developed by Catloaf Software and CEO Stéphane Peter, builds upon previous static apps that featured historical figures like Oscar Wilde and America’s founding fathers. ChatGPT enables dynamic interaction, allowing users to engage with biblical personas. While the app has received positive feedback from some church leaders, it has also faced criticism from individuals who view it as blasphemous.

The existence of an AI-powered app for conversing with biblical figures raises intriguing theological and philosophical questions. While the app offers a platform for spiritual exploration, it is crucial to maintain a discerning perspective when considering its responses.

AI Word of the Week: Anthropomorphism

The AI word of the week is “anthropomorphism.” This term refers to the tendency for individuals to attribute human-like qualities or characteristics to non-human entities, such as AI chatbots. When interacting with generative AI, people may assume emotional states or sentient behavior based on the AI’s answers, even though it lacks true emotional capacity. Generative AI’s ability to mimic human language effectively can create the illusion of sentience, contributing to the phenomenon of anthropomorphism.

Understanding anthropomorphism is essential as AI technology becomes increasingly advanced and human-like in its interactions. It serves as a reminder that although AI may exhibit human-like behavior, it lacks true consciousness and emotions.

Note: ENBLE utilizes AI technology to aid in story creation.