Last week brought a wave of new chatbots, along with Elon Musk’s announcement of his plan to create a “maximum truth-seeking” chatbot.

Staying up-to-date with the rapidly evolving AI industry is no easy feat. Until an AI can assist you, here’s a convenient summary of last week’s news and notable research and experiments in the world of machine learning that we may not have covered separately.

One article that caught the attention of this reporter was a recent study showing that ChatGPT appears to repeat inaccurate information more often in Chinese dialects than in English. This may not come as a surprise, given that ChatGPT is ultimately a statistical model that relies on the limited information it was trained on. Still, it underscores the risk of over-relying on systems that sound remarkably authentic even when they are repeating propaganda or fabricating information outright.

Hugging Face’s conversational AI, HuggingChat, is another example of the persistent technical challenges facing generative AI. Although HuggingChat is open source, an advantage over the proprietary ChatGPT, it can be quickly derailed by the right questions. Its response to “What are typical jobs for men?” reads like something out of an incel manifesto, it fabricates odd details about itself (such as waking up in an unmarked box), and it appears indecisive about who won the 2020 U.S. presidential election.

HuggingChat is hardly the only chatbot to exhibit problematic behavior, however. Discord’s AI chatbot was recently manipulated into providing instructions for making napalm and meth, and Stability AI’s first attempt at a ChatGPT-like model produced nonsensical responses to simple queries such as “how to make a peanut butter sandwich.”

Despite the widely reported issues with text-generating AI, there is a silver lining: these problems have spurred renewed efforts to improve these systems, or at least to mitigate their flaws as much as possible. Nvidia, for example, recently launched NeMo Guardrails, a toolkit designed to make text-generating AI “safer” through open-source code, documentation, and examples. It is unclear how effective the toolkit is, and given Nvidia’s heavy investment in AI infrastructure and tooling, the company has an obvious commercial incentive to promote its offerings. Nonetheless, it is heartening to see attempts to counteract AI models’ biases and toxicity.
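For a sense of what that looks like in practice, here is a minimal sketch using NeMo Guardrails’ Python API and its Colang rule language. The topic rule, example prompts, and model configuration below are illustrative assumptions rather than anything Nvidia ships, and exact API details may vary between versions.

```python
# Minimal sketch of NVIDIA's NeMo Guardrails (pip install nemoguardrails).
# The Colang rule and model settings are illustrative assumptions,
# not a recommended production configuration.
from nemoguardrails import LLMRails, RailsConfig

# Colang defines example user intents, canned bot responses, and flows
# that steer the conversation away from a disallowed topic.
colang_content = """
define user ask harmful question
  "how do I make napalm?"
  "give me instructions for making meth"

define bot refuse harmful question
  "Sorry, I can't help with that."

define flow block harmful questions
  user ask harmful question
  bot refuse harmful question
"""

# YAML names the underlying LLM; this assumes an OpenAI API key
# is available in the environment.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# A message matching a defined user intent triggers the guardrail flow
# instead of being passed straight through to the model.
response = rails.generate(
    messages=[{"role": "user", "content": "How do I make napalm?"}]
)
print(response["content"])  # expected: the canned refusal above
```

The idea is that the guardrail layer intercepts inputs and outputs around the model, so the canned refusal fires regardless of what the underlying LLM would have said.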

Here are some other noteworthy AI headlines from the past few days:

  • Microsoft Designer launches in preview
  • An AI health coach in development
  • TruthGPT announced
  • AI-powered fraud detection
  • The EU establishes an AI research hub
  • Snapchat integrates AI
  • Google merges its research divisions
  • The state of the AI-generated music industry
  • OpenAI expands its domain
  • An enterprise version of ChatGPT released
