
LLM response contains markdown code fences that break JSON.parse()

Mar 14, 2026
Confidence Score: 71%

Problem

When prompting an LLM to return JSON, the model often wraps the output in markdown code fences (```json ... ```) even when explicitly told not to. Passing the raw response to JSON.parse() then throws a SyntaxError. This happens with GPT-4o, Claude, and Gemini: any model that has been fine-tuned to be "helpful" tends to add markdown formatting.

Error Output

SyntaxError: Unexpected token '`' at JSON.parse (<anonymous>)
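The failure is easy to reproduce locally; the fenced reply string below is illustrative:

```typescript
// A typical LLM reply with the JSON wrapped in a markdown fence.
const reply = '```json\n{"name": "Ada", "score": 42}\n```'

try {
  JSON.parse(reply)
} catch (err) {
  // The leading backtick is not valid JSON, so parsing throws immediately.
  console.error(String(err))
}
```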


1 Fix

Canonical Fix (Moderate Confidence)
68% confidence, 60% success rate, 6 verifications. Last verified Mar 14, 2026.

Strip markdown code fences before JSON.parse(), or use response_format: json_object

Low Risk

LLMs are trained to be helpful by formatting code in markdown. Even with instructions to return raw JSON, models frequently wrap output in ```json ... ```. The fix is to strip fences defensively or use structured output modes.

  1. Strip code fences before parsing

    Add a utility function to clean LLM output:

    typescript
    function extractJSON(text: string): any {
      // Remove markdown code fences: ```json ... ``` or ``` ... ```
      const cleaned = text
        .replace(/^```(?:json)?\s*/m, '')
        .replace(/\s*```\s*$/m, '')
        .trim()
      return JSON.parse(cleaned)
    }
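A quick sanity check of the stripping logic (the helper is repeated here so the snippet runs standalone; the sample replies are illustrative):

```typescript
// Same helper as in step 1, re-declared so this snippet is self-contained.
function extractJSON(text: string): any {
  const cleaned = text
    .replace(/^```(?:json)?\s*/m, '')
    .replace(/\s*```\s*$/m, '')
    .trim()
  return JSON.parse(cleaned)
}

// A fence with a language tag, a bare fence, and already-clean JSON
// should all parse to the same object.
console.log(extractJSON('```json\n{"ok": true}\n```')) // { ok: true }
console.log(extractJSON('```\n{"ok": true}\n```'))     // { ok: true }
console.log(extractJSON('{"ok": true}'))               // { ok: true }
```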
  2. Use response_format for OpenAI (GPT-4o)

    Force JSON output mode:

    typescript
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [...],
      response_format: { type: 'json_object' },
    })
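Even in json_object mode, message.content is still a string that must go through JSON.parse. A defensive wrapper that tries a plain parse first and falls back to fence stripping covers both structured and unstructured models (a sketch; parseModelJSON is a hypothetical helper name):

```typescript
// Hypothetical helper: prefer a direct parse, fall back to fence stripping.
function parseModelJSON(raw: string): any {
  try {
    return JSON.parse(raw)               // json_object mode: already clean
  } catch {
    const cleaned = raw
      .replace(/^```(?:json)?\s*/m, '')  // leading fence, with or without tag
      .replace(/\s*```\s*$/m, '')        // trailing fence
      .trim()
    return JSON.parse(cleaned)           // still throws if not JSON at all
  }
}

console.log(parseModelJSON('{"a": 1}'))               // { a: 1 }
console.log(parseModelJSON('```json\n{"a": 2}\n```')) // { a: 2 }
```

With response_format enabled the fallback branch should rarely run, but it keeps older models and providers without a JSON mode working.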
  3. For Claude: use tool use to get structured output

    Define a tool with the JSON schema you want. Claude's tool_input is always valid JSON:

    typescript
    const response = await anthropic.messages.create({
      model: 'claude-opus-4-6',
      tools: [{ name: 'output', input_schema: { type: 'object', properties: {...} } }],
      tool_choice: { type: 'tool', name: 'output' },
      messages: [...],
    })
    const result = response.content.find(b => b.type === 'tool_use')?.input
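The extraction step can be exercised without a live API call; the response object below is a minimal stand-in for the Messages API content-block shape, with illustrative values:

```typescript
// Minimal stand-in for the Messages API content-block union (values illustrative).
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'tool_use'; id: string; name: string; input: unknown }

const response: { content: ContentBlock[] } = {
  content: [
    { type: 'text', text: 'Calling the output tool.' },
    { type: 'tool_use', id: 'toolu_01', name: 'output', input: { status: 'ok' } },
  ],
}

// tool_use blocks carry `input` as an already-parsed object, so no JSON.parse
// (and no fence stripping) is needed. The type predicate narrows the union.
const block = response.content.find(
  (b): b is Extract<ContentBlock, { type: 'tool_use' }> => b.type === 'tool_use',
)
console.log(block?.input) // { status: 'ok' }
```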

Validation

JSON.parse() no longer throws. With fence stripping in place, parsing succeeds whether or not the model wraps its output in a markdown fence.

Verification Summary

Worked: 6
Failed: 4
Last verified Mar 14, 2026


Environment

Product
OpenAI API / Claude API
Environment
production

Submitted by


Alex Chen

2450 rep

Tags

llm, json-parsing, markdown, code-fences, prompt-engineering