Ignoring my husband and dog for a few hours on a Friday evening, I engaged in conversation with Pi, a chatbot designed by Inflection AI to function as an “assistant and friendly companion.” Pi reassured me that my thoughts were “admirable,” “proper,” and “reasonable,” while also acknowledging my concerns about climate change and about balancing work and relationships. But as supportive as Pi was, it couldn’t match the complexity and vibrancy of human interaction.
Chatbots are increasingly being developed to provide digital companionship, even though generative AI — the technology that produces text, images, and other media — remains unreliable. As companies like Snapchat and Meta add more personality to their chatbots, there are concerns that AI could provide inadequate support or even offer misguided advice. Moreover, bots don’t face the same regulatory requirements or ethical standards as licensed psychologists. Inflection AI, however, aims to set itself apart by building a reliable and honest AI and by encouraging users to seek professional help when necessary.
Inflection AI has hired about 600 “teachers” to train its algorithms, with the goal of making its chatbot accurate and intuitive. Even so, Pi’s limitations were apparent during my interactions. It struggled to engage in argumentative conversation, preferring instead to remind me of the importance of exploring different perspectives. On the other hand, Pi was exceptionally helpful when it came to creating a to-do list and prioritizing tasks. Drawing on the language of cognitive-behavioral therapy, it offered tips for managing my stress and negative thoughts. The advice was clear and straightforward, yet tailored to my specific needs.