Loop Breaker AI uses Groq's LLM inference API (running Llama 3) to analyse loop descriptions and generate break systems. Every analysis is generated fresh for your specific input - we do not use templates or pre-written advice. Each time you submit a loop description, a new inference call is made with your exact text, producing a response unique to your situation.
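As a rough sketch, each submission could be turned into a chat-completion payload along these lines. The model identifier, system prompt wording, and field names here are illustrative assumptions, not the production values:

```python
def build_analysis_request(loop_description: str) -> dict:
    """Build a chat-completion payload for one fresh analysis call.

    The user's exact text goes into the request unmodified - no templates,
    no pre-written advice.
    """
    system_prompt = (
        "Analyse the user's behavioural loop and respond with JSON only, "
        "containing: trigger, emotional_hook, break_point, reward, pattern, "
        "and break_system_steps (an ordered array of steps)."
    )
    return {
        "model": "llama3-70b-8192",                  # assumed model identifier
        "response_format": {"type": "json_object"},  # ask for JSON-only output
        "messages": [
            {"role": "system", "content": system_prompt},
            # The submitted description, passed through verbatim:
            {"role": "user", "content": loop_description},
        ],
    }

payload = build_analysis_request(
    "Every evening I open social media the moment I feel bored."
)
print(payload["messages"][-1]["content"])
```

Because the user's text is the final message in every request, two different descriptions necessarily produce two different inference calls.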
Groq provides extremely fast LLM inference, which means you get your analysis in seconds rather than minutes. Speed matters for a behaviour-change tool - the gap between describing a problem and seeing the analysis should be as small as possible. A slow analysis breaks the moment of reflection and reduces the likelihood that insight turns into action.
All AI responses are constrained to return structured JSON only - no free-form prose. This ensures the analysis is consistent, parseable, and can be rendered directly into the UI without editing. The AI is instructed to identify specific named fields (trigger, emotional hook, break point, reward, pattern) and to return the break system steps as an ordered array. This structure is what makes the output immediately actionable rather than merely interesting.
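A minimal sketch of validating that structure before rendering - the snake_case field names and the helper itself are assumptions for illustration, not the actual implementation:

```python
import json

# The named fields the UI expects in every analysis (assumed keys).
REQUIRED_FIELDS = ("trigger", "emotional_hook", "break_point", "reward", "pattern")

def parse_analysis(raw: str) -> dict:
    """Parse the model's JSON response and check the fields the UI renders."""
    analysis = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in analysis]
    if missing:
        raise ValueError(f"analysis missing fields: {missing}")
    steps = analysis.get("break_system_steps")
    if not isinstance(steps, list) or not steps:
        raise ValueError("break_system_steps must be a non-empty ordered array")
    return analysis

sample = json.dumps({
    "trigger": "boredom in the evening",
    "emotional_hook": "novelty",
    "break_point": "before unlocking the phone",
    "reward": "brief distraction",
    "pattern": "boredom -> scroll -> guilt",
    "break_system_steps": [
        "Move the app off the home screen",
        "Charge the phone outside the bedroom after 9pm",
    ],
})
analysis = parse_analysis(sample)
print(analysis["break_system_steps"][0])
```

If the response fails validation, the app can retry the inference call rather than show a half-formed analysis.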
AI pattern detection is not perfect. If an analysis does not resonate with you, you can re-submit with more detail - specificity is the single biggest factor in analysis quality. Your lived experience of the loop is always the ground truth. The AI is a thinking tool, not an authority. It surfaces structure. You decide whether that structure fits.