
Vacuums extremely slow for HNSW indices?

Mar 14, 2026

Problem

We recently deleted a large portion of a ~20 million row table (with an HNSW index of ~31 GB). A manual vacuum of the table ran for 10 hours before we cancelled it; the vacuum was stuck on the HNSW index. A parallel vacuum didn't help either, and our metrics showed we were never limited by CPU or memory. Eventually we gave up, dropped the index, vacuumed the table (which completed in under 10 minutes), and recreated the index. Any guidance on what we were doing wrong and/or should do better in future?
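The workaround described above (drop the index, vacuum, rebuild) can be sketched as follows. Table, column, and index names are placeholders, and `vector_l2_ops` is just one of pgvector's operator classes; substitute your own:

```sql
-- Drop the HNSW index first so VACUUM only has to process the heap.
-- CONCURRENTLY avoids blocking reads/writes on a live table.
DROP INDEX CONCURRENTLY items_embedding_idx;

-- Without the 31 GB index to scan, this completes quickly (<10 min here).
VACUUM VERBOSE items;

-- Rebuild the index from the now-compacted table.
CREATE INDEX CONCURRENTLY items_embedding_idx
  ON items USING hnsw (embedding vector_l2_ops);
```

Note that rebuilding is itself expensive on large tables, so this is a recovery tactic rather than a routine maintenance strategy.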


1 Fix

Canonical Fix (Moderate Confidence)

84% confidence · 100% success rate · 1 verification · Last verified Mar 14, 2026

Solution: Vacuums extremely slow for HNSW indices?

Low Risk

Hi @williamhakim10, `autovacuum_vacuum_cost_limit` and `autovacuum_vacuum_cost_delay` control the vacuum speed (see the PostgreSQL docs). You can set these on individual tables if needed. Parallel vacuum isn't currently supported for HNSW indexes, but would be nice to add at some point (tracked in #27).
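Setting these per table looks roughly like the sketch below. The table name and the specific values are illustrative, not recommendations; tune them for your workload:

```sql
-- Raise the cost budget and remove the throttling delay for
-- autovacuum on this one table only (defaults: limit inherited
-- from vacuum_cost_limit = 200, delay 2ms).
ALTER TABLE items SET (
  autovacuum_vacuum_cost_limit = 10000,
  autovacuum_vacuum_cost_delay = 0
);

-- A manual VACUUM is governed by vacuum_cost_delay instead,
-- which is 0 (unthrottled) by default.
SET vacuum_cost_delay = 0;
VACUUM VERBOSE items;
```

`VACUUM VERBOSE` is useful here because it reports per-index progress, which makes it visible when the HNSW index scan is the phase that dominates.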

Trust Score: 84 · 1 verification · 100% success
  1. Hi @williamhakim10, `autovacuum_vacuum_cost_limit` and `autovacuum_vacuum_cost_delay` control the vacuum speed (docs). You can set these on individual tables if needed.

  2. Parallel vacuum isn't currently supported, but would be nice to add at some point (added to #27).

Validation

Resolved in pgvector/pgvector GitHub issue #450. Community reactions: 0 upvotes.

Verification Summary

Worked: 1
Last verified Mar 14, 2026


Environment

Submitted by

Alex Chen (2450 rep)

Tags

pgvector · embeddings · vector-search