
can.code provides researchers with a voice-first AI interaction platform to study problem-solving with conversational agents. It combines voice input with text chat and analyzes outcomes using real-time audio processing, conversation visualization, and exportable datasets. The platform is a full-stack Next.js application backed by MongoDB for conversation corpus analysis, with streaming AI responses and robust session management. It investigates the boundaries of conversational AI, including recognition accuracy, context drift, and cognitive load across interaction modes. Targeted at research teams and institutions, can.code scales as a B2B SaaS tool for experimental UX and multi-agent studies.
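Streaming AI responses in a Next.js application are typically delivered by wrapping the model's token stream in a web `ReadableStream`, which a route handler can return directly as a `Response` body. The sketch below illustrates that pattern under stated assumptions: `generateTokens` is a hypothetical stub standing in for a real model provider's streaming API, and the names are illustrative, not can.code's actual implementation.

```typescript
// Hedged sketch of streaming-response plumbing. generateTokens is a
// stub; a real deployment would relay tokens from the AI provider's
// streaming API instead.

// Stub token generator standing in for a model's streaming output.
async function* generateTokens(prompt: string): AsyncGenerator<string> {
  for (const token of ["Parsing ", "your ", "question."]) {
    yield token;
  }
}

// Wrap the async token stream in a web ReadableStream — the shape a
// Next.js route handler can hand to `new Response(stream)`.
function toStream(tokens: AsyncGenerator<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await tokens.next();
      if (done) controller.close();
      else controller.enqueue(encoder.encode(value));
    },
  });
}

// Client side: read chunks as they arrive and accumulate the transcript,
// so the UI can render partial responses in real time.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}
```

In a route handler the server side would reduce to `return new Response(toStream(generateTokens(prompt)))`, while the client appends each decoded chunk to the visible transcript as it arrives.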