← Back to SansadLocal

SansadLocal — User guide

A quick tour of every feature, how to set up AI, how the privacy model works, and what costs (if any) come with each option.

On this page

  1. What is SansadLocal?
  2. The interface at a glance
  3. Browse, search, filter
  4. Reading a report
  5. Setting up AI
  6. Generating an AI summary
  7. Asking questions about a report
  8. Web-search enrichment for Ask
  9. Exporting metadata and summaries
  10. Batch-summarising a committee
  11. Privacy & what's cached locally
  12. Costs & limits — what's free, what isn't
  13. Troubleshooting

1. What is SansadLocal?

A single-file browser app that surfaces every report from India's 24 Departmentally Related Standing Committees (DRSCs): 16 under the Lok Sabha and 8 under the Rajya Sabha. Reports are scraped daily from sansad.in by an open-source GitHub Action and mirrored as static JSON on GitHub Pages. The app itself runs entirely in your browser: there is no SansadLocal server, no account, no analytics.

Three things make it different from the upstream ParliamentWatch (whose Python scraper powers the data mirror):

2. The interface at a glance

SansadLocal main view: header with brand, stats, AI pill, settings, help; filter row; toolbar with batch summarise + export; report list with status badges

Main view — top header, filters, toolbar, sortable report list.

The header runs across the top:

SansadLocal (brand) · Indian Parliamentary Committee Reports (subtitle) · 2671 reports · 24 committees · 72 with text (stats) · AI off (AI status pill) · ? (help)

3. Browse, search, filter

Below the header, a row of filters narrows the list:

  1. Search — substring match against titles. After the app's background indexer finishes (you'll see "Full-text search ready (N reports)" in the status line), search also matches against body content of every extracted PDF.
  2. All committees — narrow to one of the 24 DRSCs.
  3. All Lok Sabhas — most data is currently from LS18; older terms backfill over time.
  4. All categories — auto-tagged from titles: DFG (Demands for Grants) · AT (Action Taken) · SUBJ (subject reports); BILL and ASSURE badges also exist.
  5. Sort — newest first by default.
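
The category badges above are derived from report titles. A minimal sketch of how such tagging could work — the `categorise` helper and its keyword rules are illustrative, not the app's actual code:

```javascript
// Hypothetical sketch of title-based category tagging (the app's real
// rules may differ). Checks run in priority order: "Action Taken on
// Demands for Grants" should tag AT, not DFG, so AT is tested first.
function categorise(title) {
  const t = title.toLowerCase();
  if (t.includes('action taken')) return 'AT';
  if (t.includes('demands for grants') || t.includes('demand for grants')) return 'DFG';
  if (t.includes('bill')) return 'BILL';
  if (t.includes('assurance')) return 'ASSURE';
  return 'SUBJ'; // default: subject report
}
```

The rule order is the interesting design choice: more specific phrases must win before generic ones.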

Each row also carries a status badge on the right:

4. Reading a report

Report dialog open on a Standing Committee on Coal, Mines and Steel report, Details tab showing committee, report no., dates, PDF links

Report dialog — Details tab. PDF links go straight to sansad.in.

Click any row to open a four-tab dialog:

Details · Full text · AI summary · Ask
If a report shows "Full text has not yet been extracted", the mirror's daily action hasn't reached that PDF yet. In the meantime, open the PDF directly from the Details tab.

5. Setting up AI

Click the Settings icon in the header (or the AI status pill). Pick one of two modes:

Settings modal: AI mode = Local AI (WebGPU, Gemma); Local AI section with Model dropdown, Status pill, Load + Clear cache buttons

Settings → Local AI section.

Option A — Local AI (default, no key, free)

Runs an open-weight model entirely in your browser via Transformers.js on WebGPU. The first load downloads weights from Hugging Face; the browser caches them so subsequent visits are instant.

| Model | Size | Notes |
|---|---|---|
| Gemma 4 E2B | ~1.5 GB | Default. Good balance of quality and download size. |
| Gemma 4 E4B | ~4.9 GB | Stronger summaries; takes longer to download and longer per response. |
| Ternary Bonsai 1.7B | ~470 MB | Smallest download. Good for low-bandwidth or quick trials. |
| Ternary Bonsai 4B | ~1.1 GB | Sweet spot for quality vs. size on the Bonsai line. |
| Ternary Bonsai 8B | ~2.2 GB | Strongest Bonsai option. Needs a recent GPU; 64K context. |

  1. Open Settings. AI mode = Local AI.
  2. Pick a model from the dropdown.
  3. Click Load. First run shows a progress bar — your machine is downloading weights from Hugging Face.
  4. When the status pill says "<model> ready", you can use AI summary and Ask in any report dialog.
Auto-load on return visits. Once a model is cached, the next time you load SansadLocal it auto-loads in the background — no need to click Load again.
Requirements. WebGPU is required: recent Chrome / Edge / Brave (113+) or Firefox 130+ on a recent device. If your browser doesn't support WebGPU, the local-AI option is disabled and the pill shows "No WebGPU"; switch to BYOK mode instead.

Option B — BYOK (Bring Your Own Key)

Settings modal: AI mode = BYOK; BYOK provider section with Provider dropdown (Anthropic), API key field, Model field

Settings → BYOK section. Provider list mirrors upstream ParliamentWatch.

Requests go directly from your browser to the provider you choose. Your key stays in this browser's localStorage and is never sent to any other server.

  1. Open Settings. AI mode = BYOK.
  2. Pick a provider. Get a key from the provider's website (links in the table at §12 Costs & limits).
  3. Paste the key into the API key field. Optionally override the default model.
  4. Click Save. The pill turns to "BYOK: <Provider>".
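
The BYOK flow amounts to building a provider request in the browser with your stored key. A hedged sketch, not the app's actual code — `buildAnthropicRequest` is a hypothetical helper, and you should check your provider's API reference for the exact endpoint and headers:

```javascript
// Sketch only: the shape of a browser-side BYOK call to Anthropic's
// Messages API. The key is read from localStorage and placed directly
// in the request headers; nothing is sent here, we only build the
// request descriptor.
function buildAnthropicRequest(apiKey, model, prompt) {
  return {
    url: 'https://api.anthropic.com/v1/messages',
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-api-key': apiKey, // stays between your browser and the provider
      'anthropic-version': '2023-06-01',
    },
    body: JSON.stringify({
      model,
      max_tokens: 1024,
      messages: [{ role: 'user', content: prompt }],
    }),
  };
}
```

Other providers differ only in endpoint, header names, and body shape.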

6. Generating an AI summary

Report dialog AI summary tab with a generated summary about a Standing Committee report on AI in Electronics & IT

AI summary tab — a freshly generated 4-section briefing.

  1. Make sure AI is configured (Local model loaded, or BYOK key saved).
  2. Click any report row to open the dialog.
  3. Switch to the AI summary tab.
  4. Click Generate. The model streams a 4-section plain-English briefing (what it's about, key findings, recommendations, why it matters).
  5. The summary is cached in your browser — opening that report again later shows it instantly. Click Regenerate for a fresh attempt.
Switching tabs while a summary is streaming (say, to peek at the Full text) won't lose progress. Switch back and you'll see everything generated so far.

7. Asking questions about a report

Report dialog Ask tab with system message and a chat input with Send button

Ask tab — chat input scoped to the open report.

  1. From the dialog, switch to the Ask tab.
  2. Type a question. Press Enter or click Send.
  3. The model gets the report's full text (truncated to fit the context window) plus your cached summary if one exists. It answers from the report only.
  4. Each new question + answer is appended to the thread for the duration of the dialog. Closing the dialog resets the thread.
Ask uses any cached AI summary as additional context, so generating a summary first sometimes produces sharper answers — especially for "what does this report recommend?" style questions.
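
Step 3 mentions that the report text is truncated to fit the context window. A minimal sketch of one way to do that, assuming a simple character budget — the app may well count tokens instead, and `truncateForContext` is an illustrative name:

```javascript
// Illustrative: fit the report text into a rough character budget
// before prompting. Cutting at the last paragraph break inside the
// budget avoids handing the model a half-finished sentence.
function truncateForContext(text, maxChars = 12000) {
  if (text.length <= maxChars) return text;
  const cut = text.lastIndexOf('\n\n', maxChars);
  return text.slice(0, cut > 0 ? cut : maxChars) + '\n\n[…truncated…]';
}
```

A visible truncation marker also lets the model (and you) know the tail of the report is missing.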

8. Web-search enrichment for Ask

Settings modal Web search section with Provider = Tavily, API key field, Save button

Settings → Web search section.

Optional. When configured, you get a 🌐 button next to Send in the Ask tab. Clicking it does a web search first and feeds the top results into the prompt alongside the report text. Useful for "is there recent news on this?" or "has the government acted on this since the report?" style follow-ups.

| Provider | Free tier | Where to get a key |
|---|---|---|
| Tavily | 1,000 queries/month | app.tavily.com |
| Brave Search | 2,000 queries/month | api.search.brave.com |
| SearXNG (self-hosted) | Unlimited (you run it) | Use any public instance with JSON output enabled, or self-host from github.com/searxng/searxng |

  1. Settings → Web search section.
  2. Pick a provider. For Tavily / Brave, paste your API key. For SearXNG, paste the instance URL (e.g. https://searx.example.org).
  3. Save. The 🌐 button appears in the Ask tab.
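
The 🌐 step boils down to building a provider-specific search request before the model prompt is assembled. A hedged sketch — wire formats vary by provider and version, so treat `buildSearchRequest` and its fields as assumptions to check against each provider's docs:

```javascript
// Illustrative only: build (but do not send) a search request for two
// of the supported providers. For SearXNG, the "key" is the instance
// URL rather than an API key.
function buildSearchRequest(provider, keyOrUrl, query) {
  switch (provider) {
    case 'tavily':
      return {
        url: 'https://api.tavily.com/search',
        body: JSON.stringify({ api_key: keyOrUrl, query, max_results: 5 }),
      };
    case 'searxng':
      return {
        url: `${keyOrUrl}/search?q=${encodeURIComponent(query)}&format=json`,
        body: null, // plain GET, JSON output enabled on the instance
      };
    default:
      throw new Error(`unknown provider: ${provider}`);
  }
}
```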

9. Exporting metadata and summaries

Two buttons live in the toolbar above the report list:

Both downloads happen client-side — nothing leaves your machine.
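
A client-side export is just serialisation plus a download trigger. A sketch under assumptions — the field names below are illustrative, not the app's actual export schema:

```javascript
// Sketch of the serialisation half of a client-side export. In the
// browser, the returned string would be wrapped in a Blob and handed
// to a temporary <a download> link; no request leaves the machine.
function exportMetadata(reports) {
  return JSON.stringify(
    reports.map(r => ({
      title: r.title,
      committee: r.committee,
      date: r.date,
      summary: r.summary ?? null, // cached AI summary, if any
    })),
    null, 2);
}
```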

10. Batch-summarising a committee

The ✨ Summarise filtered button summarises every report in the current filter that has extracted text but no cached summary. Useful for jobs like "summarise everything the Defence committee published in LS18".

  1. Filter the list (e.g. Committee = Defence).
  2. Click ✨ Summarise filtered. The button shows the count of eligible reports and an ETA.
  3. Confirm. The dialog closes and the loop runs sequentially. The button updates to "Summarising N/M…".
  4. Click again to stop after the current report. Anything generated so far is saved.
With local AI, each summary takes 30–90 seconds depending on report length and your GPU. Plan accordingly. With a paid BYOK provider, the cost adds up — see §12.
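
The ETA shown on the button can only be a rough multiplication. A back-of-envelope sketch using the 30–90 s per-summary range quoted above (midpoint 60 s); `batchEta` is an illustrative helper, not the app's code:

```javascript
// Rough batch ETA: eligible reports times an assumed average
// seconds-per-summary (defaults to the 60 s midpoint of 30-90 s).
function batchEta(eligibleCount, secondsPerSummary = 60) {
  const totalSeconds = eligibleCount * secondsPerSummary;
  const mins = Math.round(totalSeconds / 60);
  return `${eligibleCount} reports · ~${mins} min`;
}
```

So a 40-report committee at local-AI speeds is a make-a-cup-of-tea job, not a click-and-wait one.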

11. Privacy & what's cached locally

SansadLocal has no server. Everything you generate stays in your browser:

What leaves your browser:

What never happens: no analytics, no accounts, no telemetry, no server logs (no server). The page is a static index.html served from GitHub Pages, fronted by Cloudflare for the custom domain.

12. Costs & limits — what's free, what isn't

Three layers of cost. The first is always free; the other two depend on your choice.

Layer 1 — SansadLocal infrastructure

| Component | Cost | Notes |
|---|---|---|
| The app itself | Free | Static HTML on GitHub Pages. |
| Daily data scrape | Free | GitHub Actions free tier (public repos = unlimited minutes). |
| Data hosting | Free | GitHub Pages, 100 GB/month soft limit. Current data <500 MB. |
| Custom domain (Cloudflare) | Free | Cloudflare DNS + Pages, free tier. |

Layer 2 — Local AI inference

| Model | Cost | Limits |
|---|---|---|
| Gemma 4 / Ternary Bonsai (any size) | Free | Runs on your GPU. Bandwidth: one-time download, then nothing. CPU/GPU time is yours. |

Layer 3 — BYOK API providers (optional)

| Provider | Free tier | Pay-as-you-go |
|---|---|---|
| Anthropic (Claude) | None (paid only). | ~$3/M input · $15/M output (Sonnet 4.5). A 50-page summary ≈ $0.05–$0.10. |
| OpenAI (GPT) | None on most models; some accounts get $5 trial credit. | ~$0.15/M input · $0.60/M output (GPT-4o-mini). A summary ≈ $0.005. |
| Google Gemini | Yes: 15 req/min, 1M tokens/day. Get a key at aistudio.google.com. | ~$0.075/M input · $0.30/M output above the free tier (Gemini 2.5 Flash). |
| Groq | Yes: free tier with rate limits (~30 req/min). Inference is very fast (Llama 3.3 70B in seconds). Get a key at console.groq.com. | Paid tier available for higher rate limits. |
| OpenRouter | Yes: free models (look for the :free suffix in the model name). Get a key at openrouter.ai. | Pay-as-you-go for premium models from many providers. |
| Ollama | Yes, fully free. Runs on your computer. Install from ollama.com, then ollama pull llama3.2. | Free. Set OLLAMA_ORIGINS=https://sansadlocal.naklitechie.com when starting Ollama so the browser can reach it. |
| Custom OpenAI-compatible | Whatever your endpoint charges (or doesn't). | For self-hosted vLLM / LM Studio / Together.ai etc. |
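
The per-summary estimates in the table follow from simple per-token arithmetic. A worked sketch — the 15k-input / 800-output token counts are assumptions for a 50-page report, not measured values:

```javascript
// Per-summary cost from per-million-token rates. Token counts below
// are assumptions: a 50-page report ~ 15,000 input tokens, and a
// 4-section summary ~ 800 output tokens.
function summaryCostUSD(inputTokens, outputTokens, inPerM, outPerM) {
  return (inputTokens / 1e6) * inPerM + (outputTokens / 1e6) * outPerM;
}

// Claude Sonnet 4.5 at $3/M in, $15/M out:
const sonnet = summaryCostUSD(15000, 800, 3, 15);    // 0.045 + 0.012 = $0.057
// GPT-4o-mini at $0.15/M in, $0.60/M out:
const mini = summaryCostUSD(15000, 800, 0.15, 0.60); // ~ $0.0027
```

Both land inside the ranges quoted in the table, which is all the arithmetic is meant to show.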

Layer 3b — Web-search enrichment (optional)

| Provider | Free tier | Pay-as-you-go |
|---|---|---|
| Tavily | 1,000 queries/month | $10–$80/month for higher tiers. |
| Brave Search | 2,000 queries/month | From $3 per 1,000 queries. |
| SearXNG | Free, self-hosted | |

Recommended free path for most people. Local AI = Gemma 4 E2B (or Ternary Bonsai 1.7B if low on bandwidth) + Tavily for search enrichment. You'll never see a bill.

13. Troubleshooting

The local model won't load

Generation is slow

Ollama returns CORS errors

Ollama's HTTP API doesn't allow cross-origin requests by default. Set OLLAMA_ORIGINS when starting Ollama:

OLLAMA_ORIGINS="https://sansadlocal.naklitechie.com" ollama serve

If you opened the app via http://localhost:8000 for local dev, set OLLAMA_ORIGINS=http://localhost:8000 instead.

The mirror is missing data

The daily action runs at 04:30 UTC (10:00 IST). If a new report shows up on sansad.in mid-day, it'll appear in SansadLocal the next morning. Settings → Refresh from mirror force-fetches; on a fresh visit the latest data loads automatically.
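
The "next morning" timing above can be computed from the 04:30 UTC schedule. A sketch only — `nextMirrorRun` is an illustrative helper, not part of the app:

```javascript
// Illustrative: given the current time, when will the mirror next
// refresh? The scrape runs daily at 04:30 UTC (10:00 IST).
function nextMirrorRun(now) {
  const next = new Date(Date.UTC(
    now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate(), 4, 30));
  if (next <= now) next.setUTCDate(next.getUTCDate() + 1); // today's run is past
  return next;
}
```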

Where do credits live?

Help modal Credits tab showing Built on top of ParliamentWatch, open-source pieces (Transformers.js, Gemma 4 ONNX, pypdf, GitHub Pages), built by Chirag Patnaik, source links

Help → Credits tab. Open from the ? button in the header.

Built on top of ParliamentWatch by Pranay Kotasthane — the scraper, committee config, and original idea are his. SansadLocal repackages it with on-device AI. Full credit list in the Help → Credits tab.