Avoid panic bad TL on malformed Tf/TL; skip operator instead#8

Open
BrennenWright wants to merge 11 commits into dslipak:master from BrennenWright:master
Conversation

@BrennenWright

This PR prevents panics when a content stream uses Tf (set text font and size)
or TL (set text leading) with an unexpected number of arguments.

Today, page.go does:

case "Tf":
    if len(args) != 2 {
        panic("bad TL")
    }
    ...

case "TL":
    if len(args) != 1 {
        panic("bad TL")
    }

Some real-world PDFs, produced by certain tools, use non-standard or
malformed Tf/TL operands. This triggers the panic ("bad TL") and terminates
extraction, even though the rest of the page is readable.

This change makes those cases non-fatal:

- If Tf/TL have the wrong arg count, we simply return from the handler
  and continue interpreting the rest of the stream.

This matches the library’s general behavior of ignoring unknown or
unsupported operators instead of panicking, and allows callers to
still extract partial text from PDFs with slightly malformed content
streams.
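The change can be sketched as follows. This is an illustrative, self-contained model of the approach, not the library's exact code: the type and function names are hypothetical, and the operands are simplified to numbers (a real Tf operand pair is a font name plus a size). The point is that a wrong argument count now causes an early return, so later operators are still interpreted.

```go
package main

import "fmt"

// state holds the text-state values the two operators set.
type state struct {
	fontSize float64
	leading  float64
}

// op applies one content-stream operator. Malformed Tf/TL operands
// are skipped (early return) instead of calling panic("bad TL"),
// so interpretation of the rest of the stream continues.
func (s *state) op(name string, args []float64) {
	switch name {
	case "Tf": // set text font and size: expects two operands
		if len(args) != 2 {
			return // previously: panic("bad TL")
		}
		s.fontSize = args[1]
	case "TL": // set text leading: expects one operand
		if len(args) != 1 {
			return // previously: panic("bad TL")
		}
		s.leading = args[0]
	}
}

func main() {
	var s state
	s.op("Tf", []float64{0, 12}) // well-formed: applied
	s.op("TL", nil)              // malformed: skipped, no panic
	s.op("TL", []float64{14})    // later operators still run
	fmt.Println(s.fontSize, s.leading) // 12 14
}
```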

I’ve tested this against a PDF that previously panicked ("bad TL"); with
this change the panic is gone and text extraction now succeeds.

BrennenWright and others added 11 commits March 10, 2026 08:32
Treat bad Tf/TL as a skip rather than a panic, since a panic makes the library unusable for general parsing.
Stop lexer loops on malformed/incomplete dictionaries and strings at EOF, and normalize TJ handling so malformed streams do not stall extraction.

Made-with: Cursor
Fix EOF handling to prevent content-stream parse hangs.
Break out when the Pages node count overstates reachable kids so page lookup fails fast instead of spinning forever.

Made-with: Cursor
Skip invalid PostScript def keys, ignore broken compressed substreams, and continue page content extraction so readable text can still be recovered from partially malformed PDFs.

Co-authored-by: Cursor <cursoragent@cursor.com>
Stop printing unknown cmap operators to stdout so malformed PDFs do not flood application logs during extraction.

Co-authored-by: Cursor <cursoragent@cursor.com>
Decode PNG predictor filter bytes 0-4 (None/Sub/Up/Average/Paeth) instead of assuming only Up, and fall back safely on malformed filter bytes to keep PDF extraction progressing.

Co-authored-by: Cursor <cursoragent@cursor.com>
Normalize DecodeParms Columns to xref row width when using PNG predictors so malformed producer metadata no longer causes premature EOF while reading cross-reference streams.

Co-authored-by: Cursor <cursoragent@cursor.com>
Relax header validation to parse the full first line and validate %PDF-1.x without requiring a newline at a fixed byte offset, which fixes valid files like SF1449 that include a space after the version token.

Co-authored-by: Cursor <cursoragent@cursor.com>
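The PNG-predictor commit above decodes all five standard filter bytes rather than assuming Up. A minimal sketch of per-row unfiltering, assuming one previous decoded row and `bpp` bytes per pixel (function names here are illustrative, not the library's): filters 1-4 add a predicted value back to each byte, and an unknown filter byte falls back to "no prediction" so decoding keeps progressing.

```go
package main

import "fmt"

func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

// paeth is the standard PNG Paeth predictor: pick whichever of
// left (a), up (b), upper-left (c) is closest to a + b - c.
func paeth(a, b, c int) int {
	p := a + b - c
	pa, pb, pc := abs(p-a), abs(p-b), abs(p-c)
	if pa <= pb && pa <= pc {
		return a
	}
	if pb <= pc {
		return b
	}
	return c
}

// unfilterRow decodes one predicted row in place, given the previous
// decoded row (all zeros for the first row). Filter bytes:
// 0=None, 1=Sub, 2=Up, 3=Average, 4=Paeth; anything else is treated
// as None so a malformed filter byte does not abort decoding.
func unfilterRow(filter byte, cur, prev []byte, bpp int) {
	for i := range cur {
		left, up, upLeft := 0, int(prev[i]), 0
		if i >= bpp {
			left = int(cur[i-bpp])
			upLeft = int(prev[i-bpp])
		}
		switch filter {
		case 1: // Sub: add byte bpp positions to the left
			cur[i] = byte(int(cur[i]) + left)
		case 2: // Up: add byte from the previous row
			cur[i] = byte(int(cur[i]) + up)
		case 3: // Average: add mean of left and up
			cur[i] = byte(int(cur[i]) + (left+up)/2)
		case 4: // Paeth
			cur[i] = byte(int(cur[i]) + paeth(left, up, upLeft))
		default: // 0 (None) or malformed: leave byte unchanged
		}
	}
}

func main() {
	prev := []byte{10, 20, 30}
	cur := []byte{1, 1, 1}
	unfilterRow(2, cur, prev, 1) // Up filter
	fmt.Println(cur)             // [11 21 31]
}
```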