RSS Daily Digest: Let AI Read Your Subscriptions

February 8, 2026 · Column


I subscribe to 90-odd blogs, but Reeder perpetually shows 200+ unread. Feel anxious, close the app, repeat tomorrow.

So I wrote a script: it runs automatically at 8 a.m. every day and compresses yesterday's updates into a digest I can scan in 3 minutes. Two months in, RSS has gone from a bookmark graveyard back to an input source I actually use.

One-sentence version: fetch RSS → AI translation + summaries → output a Markdown digest → auto-sync to Obsidian. Runs locally, no third-party service.


What the result looks like

# RSS 日报 2026-02-08

### The One Billion Row Challenge | 十亿行挑战
*antirez* · [原文](https://antirez.com/news/...)
> 用不同语言实现十亿行数据聚合,Java 最快方案 1.5 秒。作者分析了 SIMD 和内存映射的关键优化点。

---

### Why I Still Use RSS | 为什么我还在用 RSS
*Pluralistic* · [原文](https://pluralistic.net/...)
> Cory Doctorow 解释 RSS 在算法时代的价值:你选择看什么,而不是平台替你选。

90 articles → a 3-minute scan → 2-3 picked for a close read.


Up and running in 5 minutes

mkdir rss-digest && cd rss-digest
bun init -y && bun add rss-parser
mkdir -p data output src

Environment variables (point these at your own API):

export OPENAI_API_BASE="<REDACTED_TOKEN>"
export OPENAI_API_KEY="sk-..."
export SUMMARIZER_MODEL="gpt-4o-mini"

Create data/feeds.json:

{"feeds": [
  {"name": "antirez", "url": "http://antirez.com/rss"},
  {"name": "Pluralistic", "url": "https://pluralistic.net/feed/"}
]}

Create src/index.ts (full code below), then:

bun run src/index.ts

When the terminal prints ✅ 日报已生成 ("digest generated"), you're done.


Core code (~60 lines)
import Parser from "rss-parser";
import { readFile, writeFile, mkdir } from "fs/promises";

const parser = new Parser({ timeout: 15000 });
const API_BASE = process.env.OPENAI_API_BASE || "<REDACTED_TOKEN>";
const API_KEY = process.env.OPENAI_API_KEY || "";
const MODEL = process.env.SUMMARIZER_MODEL || "gpt-4o-mini";
const OUT_DIR = process.env.OUTPUT_DIR || "./output";

async function main() {
  const { feeds } = JSON.parse(await readFile("./data/feeds.json", "utf-8"));

  // Fetch all feeds concurrently; a failed feed just yields no articles
  const articles = (await Promise.all(
    feeds.map(async (f: any) => {
      try {
        const result = await parser.parseURL(f.url);
        return result.items.map(item => ({
          feed: f.name,
          title: item.title || "",
          link: item.link || "",
          date: new Date(item.pubDate || 0),
          content: item.contentSnippet || ""
        }));
      } catch { return []; }
    })
  )).flat();

  // Keep only articles from the last 24 hours
  const since = new Date(Date.now() - 86400000);
  const recent = articles.filter(a => a.date >= since);
  if (!recent.length) { console.log("✨ 今天没有新文章"); return; } // "no new articles today"

  // AI summaries (prompt kept in Chinese: the digest's summaries are meant to be Chinese)
  const resp = await fetch(`${API_BASE}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${API_KEY}` },
    body: JSON.stringify({
      model: MODEL,
      messages: [
        { role: "system", content: "翻译标题+1句摘要,返回 JSON: [{index,titleZh,summary}]" },
        { role: "user", content: recent.map((a,i) => `[${i+1}] ${a.title}\n${a.content.slice(0,500)}`).join("\n---\n") }
      ]
    })
  });
  const summaries = JSON.parse((await resp.json()).choices?.[0]?.message?.content?.match(/\[[\s\S]*\]/)?.[0] || "[]");

  // Render the digest
  const today = new Date().toISOString().split("T")[0];
  let md = `# RSS 日报 ${today}\n\n`;
  recent.forEach((a, i) => {
    const s = summaries.find((x:any) => x.index === i+1) || {};
    md += `### ${a.title}${s.titleZh ? ` | ${s.titleZh}` : ""}\n*${a.feed}* · [原文](${a.link})\n> ${s.summary || ""}\n\n---\n\n`;
  });

  await mkdir(OUT_DIR, { recursive: true });
  await writeFile(`${OUT_DIR}/${today}.md`, md);
  console.log(`✅ 日报已生成:${OUT_DIR}/${today}.md`);
}

main().catch(console.error);

Syncing to Obsidian

After the digest is generated, copy it into your Obsidian vault:

# Append to the end of the script, or run separately
OBSIDIAN_VAULT="$HOME/Documents/Obsidian/YourVault"
cp ./output/*.md "$OBSIDIAN_VAULT/30 Inbox/RSS/"

Or, more elegantly, make output/ a subdirectory of the Obsidian vault itself:

# Set the environment variable so the digest is written straight into Obsidian
export OUTPUT_DIR="$HOME/Documents/Obsidian/YourVault/30 Inbox/RSS"

Each day's digest then shows up in Obsidian automatically, ready to read.


Pitfalls I hit

1. Some RSS feeds time out frequently

Fetching Substack over a mainland-China connection fails often. My fix: give the parser a 15-second timeout and skip any feed that fails, so the other feeds are unaffected.

2. The AI's response format occasionally breaks

The model sometimes wraps its output in ```json fences or tacks on extra explanation. Extracting the array with the regex /\[[\s\S]*\]/ is more robust than running JSON.parse on the raw response.
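That extraction can live in a small helper (a sketch; the name `extractJsonArray` is mine, not part of the script above):

```typescript
// Hypothetical helper: pull the first JSON array out of a model reply,
// tolerating ```json fences and surrounding prose.
function extractJsonArray(raw: string): unknown[] {
  const match = raw.match(/\[[\s\S]*\]/); // greedy: first "[" to last "]"
  if (!match) return [];
  try {
    return JSON.parse(match[0]);
  } catch {
    return []; // malformed JSON inside the match: fail soft, the digest still renders
  }
}
```

Failing soft matters here: a bad batch just produces entries without summaries instead of crashing the whole run.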

3. Too many articles blow the token limit

Throwing 50 articles in at once easily exceeds the context window. I now process them 20 per batch, or switch to a longer-context model.
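The batching boils down to a chunking helper (a sketch; the 20-per-batch default and the name `chunk` are my choices):

```typescript
// Split articles into fixed-size batches so each API call stays under the context limit.
function chunk<T>(items: T[], size = 20): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// In main(), each batch would get its own API call, and the summaries merge after:
// for (const batch of chunk(recent)) { /* call the API with just this batch */ }
```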


Feeds worth subscribing to (ones I actually read)

| Category | Picks |
| --- | --- |
| AI/LLM | Simon Willison, Gary Marcus, Gwern |
| Systems/low-level | antirez, rachelbythebay, The Old New Thing, Eli Bendersky |
| Security | Krebs on Security, Troy Hunt, lcamtuf |
| Independent thinking | Pluralistic, Paul Graham, Daring Fireball |

The full list of 91 feeds is in the appendix at the end.


Next steps

  • Scheduled runs: crontab -e, then add the line <REDACTED_TOKEN>
  • Dedup: add a state.json that records already-processed URLs
  • Full-text fetching: use https://r.jina.ai/<url> to get the complete article body
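The dedup item can be sketched with two small helpers around the state.json mentioned above (an assumption-level design; `loadSeen`/`saveSeen` are my names):

```typescript
import { readFile, writeFile } from "fs/promises";

// Load the set of already-processed URLs; a missing file means a fresh start.
async function loadSeen(path = "./data/state.json"): Promise<Set<string>> {
  try {
    return new Set(JSON.parse(await readFile(path, "utf-8")));
  } catch {
    return new Set();
  }
}

// After generating the digest, persist the new URLs alongside the old ones.
async function saveSeen(seen: Set<string>, path = "./data/state.json") {
  await writeFile(path, JSON.stringify([...seen], null, 2));
}

// Wiring into main() would look roughly like:
// const seen = await loadSeen();
// const fresh = recent.filter(a => !seen.has(a.link));
// fresh.forEach(a => seen.add(a.link));
// await saveSeen(seen);
```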

Get it running first, tinker later. A digest that reliably appears every day already beats the "bookmarked but never read" state.


Appendix: reproduction instructions for an AI

Copy the block below to Claude/GPT and let it build the project from scratch:

Help me build an RSS daily-digest generator that runs on a schedule every day and syncs to Obsidian.

Goal: fetch RSS subscriptions daily, generate Chinese summaries with AI, output a Markdown digest, and auto-sync it to an Obsidian vault.

Stack: Bun + TypeScript, dependency: rss-parser

Project structure:
rss-digest/
  data/feeds.json      -- feed list {"feeds": [{"name": "xxx", "url": "..."}]}
  data/state.json      -- processed URLs (for dedup)
  src/index.ts         -- main entry point
  output/              -- digest output directory (can point directly at an Obsidian vault)

Environment variables:
  OPENAI_API_BASE  -- API endpoint (default <REDACTED_TOKEN>)
  OPENAI_API_KEY   -- API key
  SUMMARIZER_MODEL -- model name (default gpt-4o-mini)
  OUTPUT_DIR       -- output directory (can be set to an Obsidian vault path)

Core logic:
1. Read feeds.json and fetch all RSS feeds concurrently (15-second timeout, skip failures)
2. Filter to articles from the last day, excluding URLs already recorded in state.json
3. Call the OpenAI API in batches of 20 articles to produce Chinese title translations + 1-sentence summaries, returned as a JSON array
4. Generate a Markdown digest in the format: title | Chinese title + source + link + summary blockquote
5. Save to OUTPUT_DIR/YYYY-MM-DD.md and update state.json

Scheduling (macOS launchd or crontab):
- Run automatically at 8 a.m. every day
- Provide a crontab example: <REDACTED_TOKEN> >> <REDACTED_TOKEN>/logs/digest.log 2>&1
- Provide a macOS launchd plist template (optional)

Obsidian sync:
- Set OUTPUT_DIR directly to a subdirectory of the Obsidian vault (e.g. ~/Documents/Obsidian/Vault/30 Inbox/RSS/)
- Or copy the output into the vault with cp after the script finishes

Commands:
  bun init -y && bun add rss-parser
  bun run src/index.ts

Success criteria: the terminal prints「日报已生成」and a .md file named with today's date appears in the Obsidian vault

Please create the complete project structure and code, including crontab setup notes.

Appendix: full feed list (91 feeds)

{
  "feeds": [
    {"name": "simonwillison.net", "url": "https://simonwillison.net/atom/everything/"},
    {"name": "jeffgeerling.com", "url": "https://www.jeffgeerling.com/blog.xml"},
    {"name": "seangoedecke.com", "url": "https://www.seangoedecke.com/rss.xml"},
    {"name": "krebsonsecurity.com", "url": "https://krebsonsecurity.com/feed/"},
    {"name": "daringfireball.net", "url": "https://daringfireball.net/feeds/main"},
    {"name": "ericmigi.com", "url": "https://ericmigi.com/rss.xml"},
    {"name": "antirez.com", "url": "http://antirez.com/rss"},
    {"name": "idiallo.com", "url": "https://idiallo.com/feed.rss"},
    {"name": "maurycyz.com", "url": "https://maurycyz.com/index.xml"},
    {"name": "pluralistic.net", "url": "https://pluralistic.net/feed/"},
    {"name": "shkspr.mobi", "url": "https://shkspr.mobi/blog/feed/"},
    {"name": "lcamtuf.substack.com", "url": "https://lcamtuf.substack.com/feed"},
    {"name": "mitchellh.com", "url": "https://mitchellh.com/feed.xml"},
    {"name": "dynomight.net", "url": "https://dynomight.net/feed.xml"},
    {"name": "utcc.utoronto.ca", "url": "https://utcc.utoronto.ca/~cks/space/blog/?atom"},
    {"name": "xeiaso.net", "url": "https://xeiaso.net/blog.rss"},
    {"name": "devblogs.microsoft.com/oldnewthing", "url": "https://devblogs.microsoft.com/oldnewthing/feed"},
    {"name": "righto.com", "url": "https://www.righto.com/feeds/posts/default"},
    {"name": "lucumr.pocoo.org", "url": "https://lucumr.pocoo.org/feed.atom"},
    {"name": "skyfall.dev", "url": "https://skyfall.dev/rss.xml"},
    {"name": "garymarcus.substack.com", "url": "https://garymarcus.substack.com/feed"},
    {"name": "rachelbythebay.com", "url": "https://rachelbythebay.com/w/atom.xml"},
    {"name": "overreacted.io", "url": "https://overreacted.io/rss.xml"},
    {"name": "timsh.org", "url": "https://timsh.org/rss/"},
    {"name": "johndcook.com", "url": "https://www.johndcook.com/blog/feed/"},
    {"name": "gilesthomas.com", "url": "https://gilesthomas.com/feed/rss.xml"},
    {"name": "matklad.github.io", "url": "https://matklad.github.io/feed.xml"},
    {"name": "derekthompson.org", "url": "https://www.theatlantic.com/feed/author/derek-thompson/"},
    {"name": "evanhahn.com", "url": "https://evanhahn.com/feed.xml"},
    {"name": "terriblesoftware.org", "url": "https://terriblesoftware.org/feed/"},
    {"name": "rakhim.exotext.com", "url": "https://rakhim.exotext.com/rss.xml"},
    {"name": "joanwestenberg.com", "url": "https://joanwestenberg.com/rss"},
    {"name": "xania.org", "url": "https://xania.org/feed"},
    {"name": "micahflee.com", "url": "https://micahflee.com/feed/"},
    {"name": "nesbitt.io", "url": "https://nesbitt.io/feed.xml"},
    {"name": "construction-physics.com", "url": "https://www.construction-physics.com/feed"},
    {"name": "tedium.co", "url": "https://feed.tedium.co/"},
    {"name": "susam.net", "url": "https://susam.net/feed.xml"},
    {"name": "entropicthoughts.com", "url": "https://entropicthoughts.com/feed.xml"},
    {"name": "hillelwayne", "url": "https://buttondown.com/hillelwayne/rss"},
    {"name": "dwarkesh.com", "url": "https://www.dwarkeshpatel.com/feed"},
    {"name": "borretti.me", "url": "https://borretti.me/feed.xml"},
    {"name": "wheresyoured.at", "url": "https://www.wheresyoured.at/rss/"},
    {"name": "jayd.ml", "url": "https://jayd.ml/feed.xml"},
    {"name": "minimaxir.com", "url": "https://minimaxir.com/index.xml"},
    {"name": "geohot.github.io", "url": "https://geohot.github.io/blog/feed.xml"},
    {"name": "paulgraham.com", "url": "http://www.aaronsw.com/2002/feeds/pgessays.rss"},
    {"name": "filfre.net", "url": "https://www.filfre.net/feed/"},
    {"name": "blog.jim-nielsen.com", "url": "https://blog.jim-nielsen.com/feed.xml"},
    {"name": "dfarq.homeip.net", "url": "https://dfarq.homeip.net/feed/"},
    {"name": "jyn.dev", "url": "https://jyn.dev/atom.xml"},
    {"name": "geoffreylitt.com", "url": "https://www.geoffreylitt.com/feed.xml"},
    {"name": "downtowndougbrown.com", "url": "https://www.downtowndougbrown.com/feed/"},
    {"name": "brutecat.com", "url": "https://brutecat.com/rss.xml"},
    {"name": "eli.thegreenplace.net", "url": "https://eli.thegreenplace.net/feeds/all.atom.xml"},
    {"name": "abortretry.fail", "url": "https://www.abortretry.fail/feed"},
    {"name": "fabiensanglard.net", "url": "https://fabiensanglard.net/rss.xml"},
    {"name": "oldvcr.blogspot.com", "url": "https://oldvcr.blogspot.com/feeds/posts/default"},
    {"name": "bogdanthegeek.github.io", "url": "https://bogdanthegeek.github.io/blog/index.xml"},
    {"name": "hugotunius.se", "url": "https://hugotunius.se/feed.xml"},
    {"name": "gwern.net", "url": "https://gwern.substack.com/feed"},
    {"name": "berthub.eu", "url": "https://berthub.eu/articles/index.xml"},
    {"name": "chadnauseam.com", "url": "https://chadnauseam.com/rss.xml"},
    {"name": "simone.org", "url": "https://simone.org/feed/"},
    {"name": "it-notes.dragas.net", "url": "https://it-notes.dragas.net/feed/"},
    {"name": "beej.us", "url": "https://beej.us/blog/rss.xml"},
    {"name": "hey.paris", "url": "https://hey.paris/index.xml"},
    {"name": "danielwirtz.com", "url": "https://danielwirtz.com/rss.xml"},
    {"name": "matduggan.com", "url": "https://matduggan.com/rss/"},
    {"name": "refactoringenglish.com", "url": "https://refactoringenglish.com/index.xml"},
    {"name": "worksonmymachine.substack.com", "url": "https://worksonmymachine.substack.com/feed"},
    {"name": "philiplaine.com", "url": "https://philiplaine.com/index.xml"},
    {"name": "steveblank.com", "url": "https://steveblank.com/feed/"},
    {"name": "bernsteinbear.com", "url": "https://bernsteinbear.com/feed.xml"},
    {"name": "danieldelaney.net", "url": "https://danieldelaney.net/feed"},
    {"name": "troyhunt.com", "url": "https://www.troyhunt.com/rss/"},
    {"name": "herman.bearblog.dev", "url": "https://herman.bearblog.dev/feed/"},
    {"name": "tomrenner.com", "url": "https://tomrenner.com/index.xml"},
    {"name": "blog.pixelmelt.dev", "url": "https://blog.pixelmelt.dev/rss/"},
    {"name": "martinalderson.com", "url": "https://martinalderson.com/feed.xml"},
    {"name": "danielchasehooper.com", "url": "https://danielchasehooper.com/feed.xml"},
    {"name": "chiark.greenend.org.uk", "url": "https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/feed.xml"},
    {"name": "grantslatton.com", "url": "https://grantslatton.com/rss.xml"},
    {"name": "experimental-history.com", "url": "https://www.experimental-history.com/feed"},
    {"name": "anildash.com", "url": "https://anildash.com/feed.xml"},
    {"name": "aresluna.org", "url": "https://aresluna.org/main.rss"},
    {"name": "michael.stapelberg.ch", "url": "https://michael.stapelberg.ch/feed.xml"},
    {"name": "miguelgrinberg.com", "url": "https://blog.miguelgrinberg.com/feed"},
    {"name": "keygen.sh", "url": "https://keygen.sh/blog/feed.xml"},
    {"name": "mjg59.dreamwidth.org", "url": "https://mjg59.dreamwidth.org/data/rss"},
    {"name": "computer.rip", "url": "https://computer.rip/rss.xml"}
  ]
}