OpenAI has introduced a new research tool called “deep research” that promises to pull information from the web and compile detailed reports, akin to the work of a research analyst. Powered by a version of the o3 model, the tool is designed for intensive knowledge work in fields such as finance and science.
Unlike OpenAI’s earlier agent, Operator, which handles everyday tasks like grocery shopping and making reservations, deep research is aimed at more sophisticated work. It can also offer personalized recommendations for major purchases such as cars and appliances, and OpenAI claims it can accomplish in minutes what would take a human many hours.
Available exclusively to subscribers of the $200-per-month ChatGPT Pro plan, deep research works by scanning text, images, and PDFs across the internet. Responses can take anywhere from 5 to 30 minutes to generate, but users can track progress in real time through an activity sidebar. The current reports are text-only; OpenAI plans to add embedded images and data visualizations soon.
Despite its impressive capabilities, deep research has limitations. OpenAI acknowledges that, like other large language models, the tool can fabricate information and struggles to distinguish credible sources from rumors. This imprecision raises concerns about the reliability of the reports it synthesizes, particularly in scientific contexts.
While deep research may speed up information gathering, users should still verify the accuracy of its findings. As OpenAI accelerates the pace at which it ships new AI tools, vigilant oversight and critical evaluation become ever more important to preserving the integrity of the information those tools provide.