AI and You: Zoom Slurping, Fitting Shoes, Training Data

The Tumultuous World of AI: Privacy Concerns, Creative Applications, and Unforeseen Consequences

Zoom

Zoom, the ubiquitous videoconferencing company, has been making headlines lately. Aside from the amusing memes inspired by their new policy of returning employees to the office two days a week, Zoom faced backlash when users discovered that an update to their terms and conditions allowed the company to collect and analyze user data. The change alarmed even non-tech-savvy users, who were concerned about their privacy when using the software. In response, Zoom’s Chief Product Officer assured users that the company does not currently use any customer data to train their artificial intelligence models, but left the door open for future possibilities. Consent has become a central issue in these discussions, as authors such as Sarah Silverman and Margaret Atwood call out AI chatbot makers for profiting from their copyrighted work without permission or compensation.

Another significant player in the AI chatbot arena is OpenAI, the creator of ChatGPT. OpenAI recently made headlines by striking a licensing deal with the Associated Press, recognizing the need to license content upon which their chatbot is based. Additionally, OpenAI has allowed website operators to block their web crawler from accessing their sites, addressing concerns about how they obtained the content that powers ChatGPT.
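The blocking mechanism relies on the long-standing robots.txt convention: OpenAI publishes the user-agent name of its crawler, GPTBot, and site operators who don’t want their pages collected can disallow it. A minimal robots.txt entry looks like this:

```
User-agent: GPTBot
Disallow: /
```

Crawlers that honor the convention will skip the site entirely; narrower `Disallow` paths can be used to block only parts of a site.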

On the other hand, Google has been more transparent about the sources behind their AI chatbot, Bard, stating that they believe copyright laws should be altered to allow AI systems to scrape the internet. However, Google also recognizes the importance of providing an opt-out option for those who prefer not to have their data used in AI training.

These controversies surrounding data privacy and content licensing highlight the challenges AI companies face when gathering the necessary data to train their language models. Expect to see more lawsuits, licensing agreements, and discussions with regulatory agencies on a global scale as the debate continues.

AI as a Shopping Assistant

Fleet Feet

In the realm of AI applications, Fleet Feet, a national chain of specialty running stores, has been utilizing AI-powered shoe-fitting software called Fit Engine. This software analyzes various dimensions of a runner’s feet, such as arch height, width, and length, to help them find the perfect fit. ENBLE reporter Bree Fowler tried the software and was initially skeptical but found it to be surprisingly accurate. Although the AI software is useful, it is still seen as an assistance tool, with human expertise continuing to play a significant role in the selection process.
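Fleet Feet hasn’t published how Fit Engine scores a fit, but the basic idea of matching measured foot dimensions against a catalog of shoe specs can be sketched in a few lines of Python. This is a toy illustration only; the dimension names, catalog entries, and closest-match rule below are invented, not Fleet Feet’s actual method:

```python
# Toy sketch of dimension-based shoe matching -- NOT Fleet Feet's actual
# algorithm. All measurements and models below are invented for illustration.

def best_fit(foot, shoes):
    """Return the catalog shoe whose listed dimensions are closest to the foot.

    `foot` and each shoe's "dims" are dicts of measurements in millimeters,
    e.g. {"length": 270, "width": 102, "arch_height": 24}.
    """
    def mismatch(shoe):
        # Sum of absolute differences across the measured dimensions.
        return sum(abs(foot[d] - shoe["dims"][d]) for d in foot)
    return min(shoes, key=mismatch)

catalog = [
    {"model": "Trail 7", "dims": {"length": 268, "width": 100, "arch_height": 22}},
    {"model": "Road 3",  "dims": {"length": 271, "width": 103, "arch_height": 25}},
]
runner = {"length": 270, "width": 102, "arch_height": 24}
print(best_fit(runner, catalog)["model"])  # closest match in the toy catalog
```

A real fitting system would weight dimensions differently (width tolerance matters more than length tolerance, for instance), which is one reason human expertise still matters in the final selection.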

Disney’s Venture into AI

While actors and creatives express concerns about AI threatening their jobs, Walt Disney has established a task force to explore the possibilities of applying AI across their entertainment conglomerate. Disney is looking for ways to leverage AI to reduce production costs for movies and TV shows. Uncertainties surround the exact developments in progress, but Disney’s interest in AI is evident through their job postings related to AI technologies.

Fascinating AI Statistics

In a survey conducted by McKinsey & Co., it was found that within a year of the release of generative AI tools like ChatGPT, one-third of the respondents were already utilizing these tools for at least one business function. The research also revealed that AI has transitioned from being a topic solely discussed by tech employees to one that has captured the attention of company leaders. McKinsey’s findings suggest a promising future for AI, as companies are increasing their investment in this technology.

Another survey, focused on intelligent automation and conducted by Automation Anywhere, found that AI has the potential to boost productivity within organizations. Executives surveyed agreed that as AI takes over repetitive, low-value tasks previously done by humans, employee satisfaction increases, resulting in higher engagement and happiness.

Electronics Hub conducted an analysis of Google search volumes for AI tools and found that the Philippines had the highest monthly search volume, indicating widespread AI tech adoption in the country.

When AI Goes Wrong

AI Incident Database

AI systems, despite their immense potential, are not without flaws. From hallucinating false information to misidentifying individuals, AI can have unintended consequences. To catalog these instances and learn from them, an AI Incident Database has been established. Similar to databases in aviation and computer security, this platform aims to prevent or mitigate adverse outcomes resulting from the deployment of AI systems. Users can contribute to this database, which has earned the nickname “Artificial Intelligence Hall of Shame.”

Another concerning development involves generative AI enabling the creation of harmful eating disorder content. The Center for Countering Digital Hate released a report demonstrating how AI tools can generate harmful content related to eating disorders. It emphasizes the need for tech companies to prioritize safety and thoroughly test new products before their release, as limited regulation and oversight currently exist to address these issues.

AI and Misinformation

As AI becomes a more prominent information source, combating misinformation becomes increasingly crucial. The National Eating Disorder Association had to take down an AI chatbot named Tessa after it recommended unhealthy behaviors to users with eating disorders. This serves as a reminder that, despite the wealth of information available on the internet, misinformation remains a significant concern, especially as more people rely on AI for news and information.

Password Protection and AI

The Guardian recently reported on a study revealing the security risks of typing passwords while on a video call. Researchers developed a tool that accurately identified keystrokes from sound recordings alone, compromising password confidentiality. The finding underscores the need for caution when entering sensitive information during video conferences.
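The core idea behind such an acoustic side-channel attack is that each key produces a slightly different sound, which can be matched against labeled recordings. A toy nearest-neighbor sketch in Python illustrates the principle; the three-number "fingerprints" below are invented stand-ins for the spectral features a real attack would extract from audio:

```python
import math

# Toy illustration of an acoustic side-channel attack. Each key is assumed to
# have a characteristic acoustic "fingerprint" (in reality, features extracted
# from recordings). The vectors below are invented for illustration only.
fingerprints = {
    "a": [0.9, 0.1, 0.3],
    "b": [0.2, 0.8, 0.5],
    "c": [0.4, 0.4, 0.9],
}

def classify(sample):
    """Guess which key produced `sample` by nearest-fingerprint distance."""
    def dist(key):
        return math.dist(sample, fingerprints[key])
    return min(fingerprints, key=dist)

# A slightly noisy "recording" of key b still lands closest to b's fingerprint.
print(classify([0.25, 0.75, 0.55]))
```

The study reported in The Guardian used a trained deep-learning classifier rather than simple distance matching, but the threat model is the same: labeled keystroke audio in, recovered keystrokes out.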

AI Term of the Week: Training Data

Training data forms the backbone of AI models, providing information that helps them perform tasks. Text, image, and sound data are utilized to teach AI models in various domains. However, the ethical sourcing and curation of training data are important considerations, as low-quality data can introduce biases and lead to unfair decision-making by AI systems.
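One concrete way to see why curation matters: even a trivial model does nothing but reproduce the patterns in its training data. A minimal word-count sentiment "model" in pure Python makes that dependency explicit; the labeled phrases below are invented for illustration:

```python
from collections import Counter

# Minimal illustration that a model's behavior comes entirely from its
# training data. The tiny labeled dataset below is invented for illustration.
training_data = [
    ("great run comfortable shoes", "positive"),
    ("love the fit great support", "positive"),
    ("blisters terrible fit", "negative"),
    ("sore feet terrible shoes", "negative"),
]

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def predict(model, text):
    """Label new text by which class's training words it overlaps with more."""
    def score(label):
        return sum(model[label][w] for w in text.split())
    return max(model, key=score)

model = train(training_data)
print(predict(model, "great shoes"))
```

Skew the training set and the predictions skew with it; the same failure mode, at far larger scale, is what produces biased production systems.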

For instance, a 2019 study found that a widely used healthcare algorithm exhibited racial bias, favoring white patients over Black patients. The selection and quality of training data play a critical role in determining the fairness and accuracy of AI models.

In conclusion, the world of AI is a tumultuous one, filled with privacy concerns, creative applications, and unforeseen consequences. As companies navigate the challenges and opportunities presented by AI, it is essential to strike a balance between innovation and safeguarding user privacy.