---
@@ -44,6 +47,68 @@
> **Core philosophy**: *Planning is everything.* Be careful about letting the AI plan autonomously, or your codebase will turn into an unmanageable tangle.
+## 🧭 Dao (Principles)
+
+* **Whatever AI can do, don't do by hand**
+* **Ask AI about every problem**
+* **Context is the first-principles element of vibe coding: garbage in, garbage out**
+* **Think systematically along three dimensions: entities, links, and function/purpose**
+* **Data and functions are everything in programming**
+* **Input, processing, and output describe the whole process**
+* **Keep asking the AI: what? why? how?**
+* **Structure first, code second. Plan the framework carefully, or the technical debt will never be paid off**
+* **Occam's razor: add no code without necessity**
+* **The Pareto principle: focus on the 20% that matters**
+* **Think in reverse: clarify your requirements first, then build the code backward from them**
+* **Repeat: try a few more times; if it still fails, open a fresh session**
+* **Focus: extreme focus cuts through code; do one thing at a time (prodigies excepted)**
+
+## 🧩 Fa (Methods)
+
+* **A one-sentence goal plus explicit non-goals**
+* **Orthogonality: avoid duplicated functionality (depends on the scenario)**
+* **Copy rather than write; don't reinvent the wheel. First ask the AI whether a suitable repository exists, then download it and adapt it**
+* **Always read the official docs; scrape them first and feed them to the AI**
+* **Split modules by responsibility**
+* **Interfaces first, implementations later**
+* **Change only one module at a time**
+* **Documentation is context, not an afterthought**
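
The "interfaces first, implementations later" rule above, as a minimal sketch (the `Storage` names are illustrative, not from this project):

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Agree on the interface first; any implementation comes later."""

    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """One possible implementation, filled in after the contract is fixed."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]
```

Callers depend only on `Storage`, so the AI can swap implementations module by module without touching the rest of the code.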
+
+## 🛠️ Shu (Tactics)
+
+* State explicitly: **what may be changed, and what must not be**
+* For debugging, provide only: **expected vs. actual, plus a minimal reproduction**
+* Tests can be delegated to the AI, but **assertions get human review**
+* As soon as the code grows, **start a new session**
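
The debug rule above can be captured in a minimal report template (a sketch; the field names and contents are illustrative):

```
## Bug report (minimal)
Expected: clicking "Save" writes the record and returns 200
Actual:   returns 500 with "NullPointerException in save()"
Repro (smallest possible):
  1. check out the failing commit
  2. run the single failing test
Scope: you may change src/save.ts only; do not touch the schema
```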
+
+## 📋 Qi (Tools)
+
+- [**Claude Opus 4.5**](https://claude.ai/new), used in Claude Code. Expensive, but the Nigeria-region iOS subscription is a few hundred RMB cheaper. Fast and effective, the best of the best. Has a CLI and IDE plugins.
+- [**gpt-5.1-codex.1-codex (xhigh)**](https://chatgpt.com/codex/), used in the Codex CLI. Top-tier: slow, but otherwise flawless, and the only real option for complex logic in large projects. Available with a ChatGPT subscription. Has a CLI and IDE plugins.
+- [**Droid**](https://factory.ai/news/terminal-bench), the Claude Opus 4.5 inside it is even stronger than in Claude Code. Excellent. Has a CLI.
+- [**Kiro**](https://kiro.dev/), Claude Opus 4.5 is currently free here, though the CLI is a bit weak: you can't see what's currently running. Has a desktop client and a CLI.
+- [**gemini**](https://geminicli.com/), currently free. Good for grunt work: have it execute scripts already written by Claude Code or Codex, and it's the pick for organizing documents and exploring ideas. Has a desktop client and a CLI.
+- [**antigravity**](https://antigravity.google/), from Google. Free access to Claude Opus 4.5 and Gemini 3.0 Pro. Very generous.
+- [**aistudio**](https://aistudio.google.com/prompts/new_chat), also Google's. Free access to Gemini 3.0 Pro and Nano Banana.
+- [**gemini-enterprise**](https://cloud.google.com/gemini-enterprise), Google's enterprise edition. Nano Banana Pro is currently free here.
+- [**augment**](https://app.augmentcode.com/), its context engine and prompt-optimization button are outstanding. Ideal for beginners: one click writes the prompt for you. A must-have for the lazy.
+- [**cursor**](https://cursor.com/), has already captured user mindshare; everyone knows it.
+- [**Windsurf**](https://windsurf.com/), free quota for new users.
+- [**GitHub Copilot**](https://github.com/features/copilot), not personally tested.
+- [**kimik2**](https://www.kimi.com/), made in China; decent for grunt work and simple tasks. Previously 2 RMB per key with 1,024 calls a week, which was great value.
+- [**GLM**](https://bigmodel.cn/), made in China; reportedly strong, said to be roughly on par with Claude Sonnet 4.
+- [**Qwen**](https://qwenlm.github.io/qwen-code-docs/zh/cli/), made in China by Alibaba; the CLI has a free quota.
+- [**Prompt library, copy and paste to use**](https://docs.google.com/spreadsheets/d/1ngoQOhJqdguwNAilCl1joNwTje7FWWN9WiI2bo5VhpU/edit?gid=2093180351#gid=2093180351&range=A1)
+- [**System-prompt study library for other coding tools**](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools)
+- [**Skills builder (once downloaded, have the AI use this repository to generate Skills for your needs)**](https://github.com/yusufkaraaslan/Skill_Seekers)
+- [**Meta-prompt: a prompt that generates prompts**](https://docs.google.com/spreadsheets/d/1ngoQOhJqdguwNAilCl1joNwTje7FWWN9WiI2bo5VhpU/edit?gid=1770874220#gid=1770874220)
+- [**Universal project architecture template: paste it to the AI to scaffold the directory structure in one step**](./documents/通用项目架构模板.md) - Standard directory structures for multiple project types, core design principles, best-practice recommendations, and technology-selection references.
+- [**augment prompt optimizer**](https://app.augmentcode.com/), the prompt optimization genuinely works; strongly recommended.
+- [**Mind-map tool: have the AI generate a .mmd architecture diagram and paste it here to visualize it; the prompt is in the "系统架构可视化生成Mermaid" entry below**](https://www.mermaidchart.com/)
+- [**NotebookLM: drop research material and technical docs here for AI interpretation; listen to audio summaries, view mind maps, and browse Nano Banana-generated images**](https://notebooklm.google.com/)
+- [**zread, an AI repository reader: paste a GitHub repo link and it analyzes the codebase, cutting the effort of reusing existing wheels**](https://zread.ai/)
+- [**Meta-skill: the Skill that generates Skills**](./skills/claude-skills/SKILL.md)
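
The `.mmd` workflow above, as a minimal example (the diagram content is illustrative): ask the AI to emit a Mermaid file like this, then paste it into mermaidchart.com to view it:

```mermaid
flowchart TD
    UI[Web UI] --> API[API Gateway]
    API --> SVC[Order Service]
    SVC --> DB[(PostgreSQL)]
```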
+
---
## 📚 Related Documents / Resources
@@ -52,7 +117,10 @@
- [**My channel**](https://t.me/tradecat_ai_channel)
- [**小登论道: My Learning Experience**](./documents/小登论道.md)
- [**Recommended programming books**](./documents/编程书籍推荐.md)
-- [**skill生成器,把任何资料转agent的skill(技能)**](https://github.com/yusufkaraaslan/Skill_Seekers)
+- [**Meta-prompt: a prompt that generates prompts**](https://docs.google.com/spreadsheets/d/1ngoQOhJqdguwNAilCl1joNwTje7FWWN9WiI2bo5VhpU/edit?gid=1770874220#gid=1770874220)
+- [**Meta-skill: the Skill that generates Skills**](./skills/claude-skills/SKILL.md)
+- [**Skills repository, copy and use**](./skills)
+- [**Skills generator: turn any material into agent Skills**](https://github.com/yusufkaraaslan/Skill_Seekers)
- [**Google Sheets prompt database: hundreds of user and system prompts I have systematically collected and written for many scenarios, as an online spreadsheet**](https://docs.google.com/spreadsheets/d/1ngoQOhJqdguwNAilCl1joNwTje7FWWN9WiI2bo5VhpU/edit?gid=2093180351#gid=2093180351&range=A1)
- [**System-prompt collection repository**](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools)
- [**prompts-library: a tool (with usage notes) for converting the prompt library between xlsx and md folders, with hundreds of prompts and meta-prompts for many domains**](./prompts-library/)
@@ -66,6 +134,7 @@
- [**CONTRIBUTING.md**](./CONTRIBUTING.md)
- [**CODE_OF_CONDUCT.md**](./CODE_OF_CONDUCT.md)
- [**系统提示词构建原则.md**](./documents/系统提示词构建原则.md) - An in-depth guide to the core principles of building effective, reliable AI system prompts, covering communication, task execution, coding standards, and safety.
+- [**系统架构可视化生成Mermaid**](./prompts/coding_prompts/系统架构可视化生成Mermaid.md) - Generates a .mmd file directly from the project; import it into the mind-map site to view architecture diagrams, sequence diagrams, and more.
- [**开发经验.md**](./documents/开发经验.md) - Detailed notes on variable naming, file structure, coding standards, system-architecture principles, microservices, Redis, message queues, and other development experience and project conventions.
- [**vibe-coding-经验收集.md**](./documents/vibe-coding-经验收集.md) - Collected best practices for AI development and techniques for optimizing system prompts.
- [**通用项目架构模板.md**](./documents/通用项目架构模板.md) - Standard directory structures for multiple project types, core design principles, best-practice recommendations, and technology-selection references.
@@ -147,6 +216,12 @@
│ ├── 数据管道.md # Data-pipeline processing prompts.
│ ├── ... (other user prompts)
│
+├── skills/ # Houses all kinds of skills.
+│ ├── claude-skills # The meta-SKILL that generates SKILLs
+│ │ ├── SKILL.md
+│ │ ├── ... (others)
+│ ├── ... (other skills)
+│
└── backups/ # Project backup scripts.
├── 一键备份.sh # Shell script for one-click backup.
└── 快速备份.py # Python script with the actual backup logic.
@@ -488,11 +563,39 @@ gantt
---
-## 🤝 参与贡献
+## 📞 Contact
-我们热烈欢迎各种形式的贡献!如果您对本项目有任何想法或建议,请随时开启一个 [Issue](https://github.com/tukuaiai/vibe-coding-cn/issues) 或提交一个 [Pull Request](https://github.com/tukuaiai/vibe-coding-cn/pulls)。
+X (Twitter): https://x.com/123olp
-在您开始之前,请花点时间阅读我们的 [**贡献指南 (CONTRIBUTING.md)**](CONTRIBUTING.md) 和 [**行为准则 (CODE_OF_CONDUCT.md)**](CODE_OF_CONDUCT.md)。
+Telegram: https://t.me/desci0
+
+Telegram group: https://t.me/glue_coding
+
+Telegram channel: https://t.me/tradecat_ai_channel
+
+Email (may not be checked promptly): tukuai.ai@gmail.com
+
+---
+
+## ✨ Sponsorship
+
+Help! The AIs have drained my wallet. If you could spare a subscription for this kid, I'd be hugely grateful (contact me via TG or X) 🙏🙏🙏
+
+**Tron (TRC20)**: `TQtBXCSTwLFHjBqTS4rNUp7ufiGx51BRey`
+
+**Solana**: `HjYhozVf9AQmfv7yv79xSNs6uaEU5oUk2USasYQfUYau`
+
+**Ethereum (ERC20)**: `0xa396923a71ee7D9480b346a17dDeEb2c0C287BBC`
+
+**BNB Smart Chain (BEP20)**: `0xa396923a71ee7D9480b346a17dDeEb2c0C287BBC`
+
+**Bitcoin**: `bc1plslluj3zq3snpnnczplu7ywf37h89dyudqua04pz4txwh8z5z5vsre7nlm`
+
+**Sui**: `0xb720c98a48c77f2d49d375932b2867e793029e6337f1562522640e4f84203d2e`
+
+**Binance UID payment**: `572155580`
+
+---
### ✨ Contributors
@@ -505,6 +608,14 @@ gantt
---
+## 🤝 Contributing
+
+We warmly welcome contributions of all kinds! If you have any ideas or suggestions for this project, feel free to open an [Issue](https://github.com/tukuaiai/vibe-coding-cn/issues) or submit a [Pull Request](https://github.com/tukuaiai/vibe-coding-cn/pulls).
+
+Before you start, please take a moment to read our [**Contributing Guide (CONTRIBUTING.md)**](CONTRIBUTING.md) and [**Code of Conduct (CODE_OF_CONDUCT.md)**](CODE_OF_CONDUCT.md).
+
+---
+
## 📜 License
This project is licensed under the [MIT](LICENSE) license.
@@ -527,17 +638,6 @@ gantt
---
-## ✨ 赞助地址
-
-您的支持是我们持续改进项目的动力!
-
-- **Tron (TRC20)**: `TQtBXCSTwLFHjBqTS4rNUp7ufiGx51BRey`
-- **Solana**: `HjYhozVf9AQmfv7yv79xSNs6uaEU5oUk2USasYQfUYau`
-- **Ethereum (ERC20)**: `0xa396923a71ee7D9480b346a17dDeEb2c0C287BBC`
-- **BNB Smart Chain (BEP20)**: `0xa396923a71ee7D9480b346a17dDeEb2c0C287BBC`
-- **Bitcoin**: `bc1plslluj3zq3snpnnczplu7ywf37h89dyudqua04pz4txwh8z5z5vsre7nlm`
-- **Sui**: `0xb720c98a48c77f2d49d375932b2867e793029e6337f1562522640e4f84203d2e`
-
**Made with ❤️ and a lot of ☕ by [tukuaiai](https://github.com/tukuaiai), [Nicolas Zullo](https://x.com/NicolasZu) and [123olp](https://x.com/123olp)**
-[⬆ 回到顶部](#vibe-coding-终极指南-v12)
+[⬆ Back to top](#vibe-coding-至尊超级终极无敌指南-V114514)
\ No newline at end of file
diff --git a/libs/database/.gitkeep:Zone.Identifier b/libs/database/.gitkeep:Zone.Identifier
deleted file mode 100644
index d6c1ec6..0000000
Binary files a/libs/database/.gitkeep:Zone.Identifier and /dev/null differ
diff --git a/skills/ccxt/SKILL.md b/skills/ccxt/SKILL.md
new file mode 100644
index 0000000..2da5afb
--- /dev/null
+++ b/skills/ccxt/SKILL.md
@@ -0,0 +1,105 @@
+---
+name: ccxt
+description: CCXT cryptocurrency trading library. Use for cryptocurrency exchange APIs, trading, market data, order management, and crypto trading automation across 150+ exchanges. Supports JavaScript/Python/PHP.
+---
+
+# CCXT Skill
+
+Comprehensive assistance with ccxt development, generated from official documentation.
+
+## When to Use This Skill
+
+This skill should be triggered when:
+- Working with ccxt
+- Asking about ccxt features or APIs
+- Implementing ccxt solutions
+- Debugging ccxt code
+- Learning ccxt best practices
+
+## Quick Reference
+
+### Common Patterns
+
+**Pattern 1:** Reporting issues and asking for help. The full FAQ text is preserved in `references/faq.md`; the essentials when asking a question: search for duplicates first; post the request and response in verbose mode; include the full error callstack; state your programming language and version, the CCXT / CCXT Pro version, the exchange, and the method you are calling; provide a complete, compact (5-10 line) runnable program that reproduces the problem; post code and output as plain text surrounded by triple backticks, never as screenshots; and never post your apiKey and secret. The FAQ also covers feature requests, PR review policy, the TypeScript-to-JavaScript/Python/PHP transpilation pipeline, attached takeProfit/stopLoss orders, market buys with cost (`createMarketBuyOrderWithCost`, `createMarketBuyOrderRequiresPrice`), spot vs. swap trading, `reduceOnly` orders, endpoint inspection, and funding-rate fields.
+
+```
+exchange.verbose = True  # add right before the failing call to log the raw request/response
+```
+
+**Pattern 2:** To create a market-buy order with cost, first check that the exchange supports the feature (`exchange.has['createMarketBuyOrderWithCost']`). If it does, you can use the `createMarketBuyOrderWithCost` method. Example:
+
+```
+if exchange.has['createMarketBuyOrderWithCost']:
+    order = await exchange.createMarketBuyOrderWithCost(symbol, cost)
+```
+
+**Pattern 3:** Example: if you wanted to buy BTC/USDT with a market buy-order, you would need to provide an amount of, say, 5 USDT instead of 0.000X BTC. There is a check to prevent errors that explicitly requires the price, because users usually provide the amount in the base currency:
+
+```
+create_order(symbol, 'market', 'buy', 10)
+```
+
+**Pattern 4:** For a complete list of all exchanges and their supported methods, please refer to this example: https://github.com/ccxt/ccxt/blob/master/examples/js/exchange-capabilities.js. At runtime, the same information is exposed per exchange instance:
+
+```
+exchange.has  # dict of unified methods and features supported by this exchange
+```
+
+**Pattern 5:** The ccxt library supports asynchronous concurrency mode in Python 3.5+ with async/await syntax. The asynchronous Python version uses pure asyncio with aiohttp. In async mode you have all the same properties and methods, but most methods are declared with the async keyword. If you want to use async mode, you should link against the ccxt.async_support subpackage, as in the following example:
+
+```
+import ccxt.async_support as ccxt
+```
+
+## Reference Files
+
+This skill includes comprehensive documentation in `references/`:
+
+- **cli.md** - CLI documentation
+- **exchanges.md** - Exchanges documentation
+- **faq.md** - FAQ documentation
+- **getting_started.md** - Getting Started documentation
+- **manual.md** - Manual documentation
+- **other.md** - Other documentation
+- **pro.md** - Pro documentation
+- **specification.md** - Specification documentation
+
+Use `view` to read specific reference files when detailed information is needed.
+
+## Working with This Skill
+
+### For Beginners
+Start with the getting_started or tutorials reference files for foundational concepts.
+
+### For Specific Features
+Use the appropriate category reference file (api, guides, etc.) for detailed information.
+
+### For Code Examples
+The quick reference section above contains common patterns extracted from the official docs.
+
+## Resources
+
+### references/
+Organized documentation extracted from official sources. These files contain:
+- Detailed explanations
+- Code examples with language annotations
+- Links to original documentation
+- Table of contents for quick navigation
+
+### scripts/
+Add helper scripts here for common automation tasks.
+
+### assets/
+Add templates, boilerplate, or example projects here.
+
+## Notes
+
+- This skill was automatically generated from official documentation
+- Reference files preserve the structure and examples from source docs
+- Code examples include language detection for better syntax highlighting
+- Quick reference patterns are extracted from common usage examples in the docs
+
+## Updating
+
+To refresh this skill with updated documentation:
+1. Re-run the scraper with the same configuration
+2. The skill will be rebuilt with the latest information
diff --git a/skills/ccxt/references/cli.md b/skills/ccxt/references/cli.md
new file mode 100644
index 0000000..7f21cb0
--- /dev/null
+++ b/skills/ccxt/references/cli.md
@@ -0,0 +1,69 @@
+# CCXT - CLI
+
+**Pages:** 1
+
+---
+
+## CCXT CLI (Command-Line Interface)
+
+**URL:** https://github.com/ccxt/ccxt/wiki/CLI
+
+**Contents:**
+- CCXT CLI (Command-Line Interface)
+- Install globally
+- Install
+- Usage
+ - Inspecting Exchange Properties
+ - Calling A Unified Method By Name
+ - Calling An Exchange-Specific Method By Name
+- Authentication And Overrides
+- Unified API vs Exchange-Specific API
+ - Run with jq
+
+CCXT includes an example that allows calling all exchange methods and properties from command line. One doesn't even have to be a programmer or write code – any user can use it!
+
+The CLI interface is a program in CCXT that takes the exchange name and some params from the command line and executes a corresponding call from CCXT printing the output of the call back to the user. Thus, with CLI you can use CCXT out of the box, not a single line of code needed.
+
+CCXT command line interface is very handy and useful for:
+
+For the CCXT library users – we highly recommend to try CLI at least a few times to get a feel of it. For the CCXT library developers – CLI is more than just a recommendation, it's a must.
+
+The best way to learn and understand CCXT CLI – is by experimentation, trial and error. Warning: CLI executes your command and does not ask for a confirmation after you launch it, so be careful with numbers, confusing amounts with prices can cause a loss of funds.
+
+The same CLI design is implemented in all supported languages, TypeScript, JavaScript, Python and PHP – for the purposes of example code for the developers. In other words, the existing CLI contains three implementations that are in many ways identical. The code in those three CLI examples is intended to be "easily understandable".
+
+The source code of the CLI is available here:
+
+Clone the CCXT repository:
+
+Change directory to the cloned repository:
+
+Install the dependencies:
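
The three setup steps above, as a likely shell sequence (a sketch assuming the standard git + npm workflow; the original code blocks were not preserved):

```shell
# clone the CCXT repository
git clone https://github.com/ccxt/ccxt.git
# change directory into the cloned repository
cd ccxt
# install the dependencies
npm install
```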
+
+The CLI script requires at least one argument, that is, the exchange id (the list of supported exchanges and their ids). If you don't specify the exchange id, the script will print the list of all exchange ids for reference.
+
+Upon launch, CLI will create and initialize the exchange instance and will also call exchange.loadMarkets() on that exchange. If you don't specify any other command-line arguments to CLI except the exchange id argument, then the CLI script will print out all the contents of the exchange object, including the list of all the methods and properties and all the loaded markets (the output may be extremely long in that case).
+
+Normally, following the exchange id argument one would specify a method name to call with its arguments or an exchange property to inspect on the exchange instance.
+
+If the only parameter you specify to CLI is the exchange id, then it will print out the contents of the exchange instance including all properties, methods, markets, currencies, etc. Warning: exchange contents are HUGE and this will dump A LOT of output to your screen!
+
+You can specify the name of the property of the exchange to narrow the output down to a reasonable size.
+
+You can easily view which methods are supported on the various exchanges:
+
+Calling unified methods is easy:
+
+Exchange specific parameters can be set in the last argument of every unified method:
+
+Here's an example of fetching the order book on okx in sandbox mode using the implicit API and the exchange specific instId and sz parameters:
+
+Public exchange APIs don't require authentication. You can use the CLI to call any method of a public API. The difference between public APIs and private APIs is described in the Manual, here: Public/Private API.
+
+For private API calls, by default the CLI script will look for API keys in the keys.local.json file in the root of the repository cloned to your working directory and will also look up exchange credentials in the environment variables. More details here: Adding Exchange Credentials.
+
+CLI supports all possible methods and properties that exist on the exchange instance.
+
+(If the page is not being rendered for you, you can refer to the mirror at https://docs.ccxt.com/)
+
+---
diff --git a/skills/ccxt/references/exchanges.md b/skills/ccxt/references/exchanges.md
new file mode 100644
index 0000000..eb6f569
--- /dev/null
+++ b/skills/ccxt/references/exchanges.md
@@ -0,0 +1,29 @@
+# CCXT - Exchanges
+
+**Pages:** 2
+
+---
+
+## Supported Exchanges
+
+**URL:** https://github.com/ccxt/ccxt/wiki/Exchange-Markets
+
+**Contents:**
+- Supported Exchanges
+
+(If the page is not being rendered for you, you can refer to the mirror at https://docs.ccxt.com/)
+
+---
+
+## Exchanges By Country
+
+**URL:** https://github.com/ccxt/ccxt/wiki/Exchange-Markets-By-Country
+
+**Contents:**
+- Exchanges By Country
+
+The ccxt library currently supports the following cryptocurrency exchange markets and trading APIs:
+
+(If the page is not being rendered for you, you can refer to the mirror at https://docs.ccxt.com/)
+
+---
diff --git a/skills/ccxt/references/faq.md b/skills/ccxt/references/faq.md
new file mode 100644
index 0000000..ba5c45d
--- /dev/null
+++ b/skills/ccxt/references/faq.md
@@ -0,0 +1,111 @@
+# CCXT - FAQ
+
+**Pages:** 1
+
+---
+
+## Frequently Asked Questions
+
+**URL:** https://github.com/ccxt/ccxt/wiki/FAQ
+
+**Contents:**
+- Frequently Asked Questions
+- I'm trying to run the code, but it's not working, how do I fix it?
+- What is required to get help?
+- I am calling a method and I get an error, what am I doing wrong?
+- I got an incorrect result from a method call, can you help?
+- Can you implement feature foo in exchange bar?
+- When will you add feature foo for exchange bar ? What's the estimated time? When should we expect this?
+- When will you add the support for an exchange requested in the Issues?
+- How long should I wait for a feature to be added? I need to decide whether to implement it myself or to wait for the CCXT Dev Team to implement it for me.
+- What's your progress on adding the feature foo that was requested earlier? How do you do implementing exchange bar?
+
+If your question is formulated in a short manner like the above, we won't help. We don't teach programming. If you're unable to read and understand the Manual or you can't follow precisely the guides from the CONTRIBUTING doc on how to report an issue, we won't help either. Read the CONTRIBUTING guides on how to report an issue and read the Manual. You should not risk anyone's money and time without reading the entire Manual very carefully. You should not risk anything if you're not used to a lot of reading with tons of details. Also, if you don't have the confidence with the programming language you're using, there are much better places for coding fundamentals and practice. Search for python tutorials, js videos, play with examples, this is how other people climb up the learning curve. No shortcuts, if you want to learn something.
+
+When asking a question:
+
+Use the search button for duplicates first!
+
+Post your request and response in verbose mode! Add exchange.verbose = true right before the line you're having issues with, and copypaste what you see on your screen. It's written and mentioned everywhere, in the Troubleshooting section, in the README and in many answers to similar questions among previous issues and pull requests. No excuses. The verbose output should include both the request and response from the exchange.
+
+Include the full error callstack!
+
+Write your programming language and language version number
+
+Write the CCXT / CCXT Pro library version number
+
+Which method you're trying to call
+
+Post your code to reproduce the problem. Make it a complete, short, runnable program; don't swallow lines, and make it as compact as you can (5-10 lines of code), including the exchange instantiation code. Remove all irrelevant parts from it, leaving just the essence needed to reproduce the issue.
+
+DO NOT POST YOUR apiKey AND secret! Keep them safe (remove them before posting)!
+
+You're not reporting the issue properly ) Please, help the community to help you ) Read this and follow the steps: https://github.com/ccxt/ccxt/blob/master/CONTRIBUTING.md#how-to-submit-an-issue. Once again, your code to reproduce the issue and your verbose request and response ARE REQUIRED. Just the error traceback, or just the response, or just the request, or just the code – is not enough!
+
+Basically the same answer as the previous question. Read and follow precisely: https://github.com/ccxt/ccxt/blob/master/CONTRIBUTING.md#how-to-submit-an-issue. Once again, your code to reproduce the issue and your verbose request and response ARE REQUIRED. Just the error traceback, or just the response, or just the request, or just the code – is not enough!
+
+Yes, we can. And we will, if nobody else does that before us. There's very little point in asking this type of questions, because the answer is always positive. When someone asks if we can do this or that, the question is not about our abilities, it all boils down to time and management needed for implementing all accumulated feature requests.
+
+Moreover, this is an open-source library which is a work in progress. This means, that this project is intended to be developed by the community of users, who are using it. What you're asking is not whether we can or cannot implement it, in fact you're actually telling us to go do that particular task and this is not how we see a voluntary collaboration. Your contributions, PRs and commits are welcome: https://github.com/ccxt/ccxt/blob/master/CONTRIBUTING.md#how-to-contribute-code.
+
+We don't give promises or estimates on the free open-source work. If you wish to speed it up, feel free to reach out to us via info@ccxt.trade.
+
+We don't give promises or estimates on the open-source work. The reasoning behind this is explained in the previous paragraph.
+
+Again, we can't promise on the dates for adding this or that exchange, due to reasons outlined above. The answer will always remain the same: as soon as we can.
+
+Please, go ahead and implement it yourself, do not wait for us. We will add it as soon as we can. Also, your contributions are very welcome:
+
+This type of question is usually a waste of time, because answering it requires too much context-switching, and it often takes more time to answer than to actually satisfy the request with code for a new feature or a new exchange. The progress of this open-source project is also open, so whenever you're wondering how it is doing, take a look at the commit history.
+
+If it is not merged, it means that the PR contains errors that should be fixed first. If it could be merged as is, we would merge it, and you wouldn't have asked this question in the first place. The most frequent reason for not merging a PR is a violation of one of the CONTRIBUTING guidelines. Those guidelines should be taken literally: you cannot skip a single line or word from them if you want your PR to be merged quickly. Code contributions that do not break the guidelines get merged almost immediately (usually within hours).
+
+Unfortunately, we don't always have the time to quickly list out each and every single error in the code that prevents it from merging. It is often easier and faster to just go and fix the error rather than explain what one should do to fix it. Most of them are already outlined in the CONTRIBUTING guidelines. The main rule of thumb is to follow all guidelines literally.
+
+Our build system generates exchange-specific JavaScript, Python and PHP code for us automatically, so it is transpiled from TypeScript, and there's no need to fix all languages separately one by one.
+
+Thus, if it is fixed in TypeScript, it is fixed in JavaScript NPM, Python pip and PHP Composer as well. The automatic build usually takes 15-20 minutes. Just upgrade your version with npm, pip or composer after the new version arrives and you'll be fine.
+
+Some exchanges support createOrder with the additional "attached" stopLoss & takeProfit sub-orders - view StopLoss And TakeProfit Orders Attached To A Position. However, some exchanges might not support that feature and you will need separate createOrder calls to add a conditional order (e.g. trigger order, stoploss order, takeprofit order) to the already open position - view [Conditional orders](Manual.md#conditional-orders). You can also check for them via exchange.has['createOrderWithTakeProfitAndStopLoss'], exchange.has['createStopLossOrder'] and exchange.has['createTakeProfitOrder']; however, these are not as precise as the .features property.
+
+To create a market-buy order with cost, first, you need to check if the exchange supports that feature (`exchange.has['createMarketBuyOrderWithCost']`). If it does, then you can use the `createMarketBuyOrderWithCost` method. Example:
+
+Many exchanges require the amount to be in the quote currency (they don't accept the base amount) when placing spot-market buy orders. In those cases, the exchange will have the option `createMarketBuyOrderRequiresPrice` set to true.
+
+Example: If you wanted to buy BTC/USDT with a market buy-order, you would need to provide an amount of 5 USDT instead of 0.000X BTC. We have a check that explicitly requires the price to prevent mistakes, because users usually provide the amount in the base currency.
+
+So by default, if you do create_order(symbol, 'market', 'buy', 10), it will throw an error if the exchange has that option (createOrder() requires the price argument for market buy orders to calculate the total cost to spend (amount * price); alternatively, set the createMarketBuyOrderRequiresPrice option or param to false...).
+
+If the exchange requires the cost and the user provided the base amount, we need to request an extra parameter price and multiply them to get the cost. If you're aware of this behavior, you can simply disable createMarketBuyOrderRequiresPrice and pass the cost in the amount parameter, but disabling it does not mean you can place the order using the base amount instead of the quote.
+
+If you do create_order(symbol, 'market', 'buy', 0.001, 20000), ccxt will use the provided price to calculate the cost by doing 0.001*20000 and send that value to the exchange.
+
+If you want to provide the cost directly in the amount argument, you can do exchange.options['createMarketBuyOrderRequiresPrice'] = False (you acknowledge that the amount will be the cost for market-buy) and then you can do create_order(symbol, 'market', 'buy', 10)
+
+This is basically to avoid a user doing create_order('SHIB/USDT', 'market', 'buy', 1000000) thinking they're buying 1,000,000 SHIB when in reality they're buying 1,000,000 USDT worth of SHIB. For that reason, by default ccxt always accepts the base currency in the amount parameter.
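The safeguard described above boils down to simple arithmetic. A minimal sketch (illustrative only — `requires_price` stands in for ccxt's `createMarketBuyOrderRequiresPrice` option, and this is not the library's actual code):

```python
def market_buy_cost(amount, price=None, requires_price=True):
    """Mimic how the quote-currency cost for a market buy is derived.

    With requires_price=True (the default), `amount` is the base amount
    and a price is mandatory: cost = amount * price.
    With requires_price=False, `amount` is already the cost in quote currency.
    """
    if requires_price:
        if price is None:
            raise ValueError(
                "createOrder() requires the price argument for market buy orders"
            )
        return amount * price
    return amount

# Base amount 0.001 BTC at 20000 USDT -> 20 USDT cost
print(round(market_buy_cost(0.001, 20000), 8))  # 20.0
# Option disabled: the amount IS the cost
print(market_buy_cost(10, requires_price=False))  # 10
```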
+
+Alternatively, you can use the functions createMarketBuyOrderWithCost/ createMarketSellOrderWithCost if they are available.
+
+See more: Market Buys
+
+Spot trading involves buying or selling a financial instrument (like a cryptocurrency) for immediate delivery. It's straightforward, involving the direct exchange of assets.
+
+Swap trading, on the other hand, involves derivative contracts where two parties exchange financial instruments or cash flows at a set date in the future, based on the underlying asset. Swaps are often used for leverage, speculation, or hedging and do not necessarily involve the exchange of the underlying asset until the contract expires.
+
+Besides that, you will be handling contracts if you're trading swaps and not the base currency (e.g., BTC) directly, so if you create an order with amount = 1, the amount in BTC will vary depending on the contractSize. You can check the contract size by doing:
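The snippet the paragraph refers to is elided here; in ccxt the contract size is exposed on the market structure (e.g. `exchange.market(symbol)['contractSize']` after `load_markets()`). The conversion itself is simple arithmetic — a sketch with a made-up contract size of 0.001:

```python
def contracts_to_base(contracts, contract_size):
    """Convert a swap order amount (in contracts) to the base currency."""
    return contracts * contract_size

# With a hypothetical contractSize of 0.001, 5 contracts = 0.005 BTC
print(round(contracts_to_base(5, 0.001), 9))  # 0.005
```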
+
+A reduceOnly order is a type of order that can only reduce a position, not increase it. To place a reduceOnly order, you typically use the createOrder method with a reduceOnly parameter set to true. This ensures that the order will only execute if it decreases the size of an open position, and it will either partially fill or not fill at all if executing it would increase the position size.
+
+See more: Trailing Orders
+
+To check the endpoint used by a unified method in the CCXT library, you would typically need to refer to the source code of the library for the specific exchange implementation you're interested in. The unified methods in CCXT abstract away the details of the specific endpoints they interact with, so this information is not directly exposed via the library's API. For detailed inspection, you can look at the implementation of the method for the particular exchange in the CCXT library's source code on GitHub.
+
+See more: Unified API
+
+The funding rate structure has three different funding rate values that can be returned:
+
+As an example, say it is 12:30 and the exchange uses 4-hour funding intervals. The previousFundingRate was applied at 12:00, and the fundingRate value tells you the upcoming rate, which will be applied at 16:00; the nextFundingRate is the predicted rate for the interval after that, at 20:00.
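The timing here is just modular arithmetic. A sketch (hours only, assuming funding events anchored at 00:00 on a fixed interval):

```python
def funding_times(now_hour, interval=8):
    """Return (previous, upcoming, next-after) funding hours for a given time."""
    prev = (now_hour // interval) * interval
    upcoming = prev + interval
    following = upcoming + interval
    return prev % 24, upcoming % 24, following % 24

# At 12:30 with 4-hour intervals: previous 12:00, upcoming 16:00, next 20:00
print(funding_times(12.5, interval=4))  # (12.0, 16.0, 20.0)
```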
+
+(If the page is not being rendered for you, you can refer to the mirror at https://docs.ccxt.com/)
+
+---
diff --git a/skills/ccxt/references/getting_started.md b/skills/ccxt/references/getting_started.md
new file mode 100644
index 0000000..cb13457
--- /dev/null
+++ b/skills/ccxt/references/getting_started.md
@@ -0,0 +1,72 @@
+# CCXT - Getting Started
+
+**Pages:** 1
+
+---
+
+## Install
+
+**URL:** https://github.com/ccxt/ccxt/wiki/Install
+
+**Contents:**
+- Install
+ - JavaScript (NPM)
+  - JavaScript (for use with the `<script>` tag)
+### CDN (UMD)
+
+## Quick Start Examples
+
+### Basic Reusable Capture
+```javascript
+// Create reusable capture object
+const result = await snapdom(document.querySelector('#target'));
+
+// Export to different formats
+const png = await result.toPng();
+const jpg = await result.toJpg();
+const svg = await result.toSvg();
+const canvas = await result.toCanvas();
+const blob = await result.toBlob();
+
+// Use the result
+document.body.appendChild(png);
+```
+
+### One-Step Export
+```javascript
+// Direct export without intermediate object
+const png = await snapdom.toPng(document.querySelector('#target'));
+const svg = await snapdom.toSvg(element);
+```
+
+### Download Element
+```javascript
+// Automatically download as file
+await snapdom.download(element, 'screenshot.png');
+await snapdom.download(element, 'image.svg');
+```
+
+### With Options
+```javascript
+const result = await snapdom(element, {
+ scale: 2, // 2x resolution
+ width: 800, // Custom width
+ height: 600, // Custom height
+ embedFonts: true, // Include @font-face
+ exclude: '.no-capture', // Hide elements
+ useProxy: true, // Enable CORS proxy
+ straighten: true, // Remove transforms
+ noShadows: false // Keep shadows
+});
+
+const png = await result.toPng({ quality: 0.95 });
+```
+
+## Essential Options Reference
+
+| Option | Type | Purpose |
+|--------|------|---------|
+| `scale` | Number | Scale output (e.g., 2 for 2x resolution) |
+| `width` | Number | Custom output width in pixels |
+| `height` | Number | Custom output height in pixels |
+| `embedFonts` | Boolean | Include non-icon @font-face rules |
+| `useProxy` | String\|Boolean | Enable CORS proxy (URL or true for default) |
+| `exclude` | String | CSS selector for elements to hide |
+| `straighten` | Boolean | Remove translate/rotate transforms |
+| `noShadows` | Boolean | Strip shadow effects |
+
+## Common Patterns
+
+### Responsive Screenshots
+```javascript
+// Capture at different scales
+const mobile = await snapdom.toPng(element, { scale: 1 });
+const tablet = await snapdom.toPng(element, { scale: 1.5 });
+const desktop = await snapdom.toPng(element, { scale: 2 });
+```
+
+### Exclude Elements
+```javascript
+// Hide specific elements from capture
+const png = await snapdom.toPng(element, {
+ exclude: '.controls, .watermark, [data-no-capture]'
+});
+```
+
+### Fixed Dimensions
+```javascript
+// Capture with specific size
+const result = await snapdom(element, {
+ width: 1200,
+ height: 630 // Standard social media size
+});
+```
+
+### CORS Handling
+```javascript
+// Fallback for CORS-blocked resources
+const png = await snapdom.toPng(element, {
+ useProxy: 'https://cors.example.com/?' // Custom proxy
+});
+```
+
+### Plugin System (Beta)
+```javascript
+// Extend with custom exporters
+snapdom.plugins([pluginFactory, { colorOverlay: true }]);
+
+// A plugin can define custom exporters via defineExports
+// (sketch of the factory shape; see the plugin docs for details):
+const pluginFactory = () => ({
+  defineExports(context) {
+    return {
+      pdf: async (ctx, opts) => { /* generate PDF */ }
+    };
+  }
+});
+
+// Lifecycle hooks available:
+// beforeSnap → beforeClone → afterClone →
+// beforeRender → beforeExport → afterExport
+```
+
+## Performance Comparison
+
+SnapDOM significantly outperforms html2canvas:
+
+| Scenario | SnapDOM | html2canvas | Improvement |
+|----------|---------|-------------|-------------|
+| Small (200×100) | 1.6ms | 68ms | 42x faster |
+| Medium (800×600) | 12ms | 280ms | 23x faster |
+| Large (4000×2000) | 171ms | 1,800ms | 10x faster |
+
+## Development
+
+### Setup
+```bash
+git clone https://github.com/zumerlab/snapdom.git
+cd snapdom
+npm install
+```
+
+### Build
+```bash
+npm run compile
+```
+
+### Testing
+```bash
+npm test
+```
+
+## Browser Support
+
+- Chrome/Edge 90+
+- Firefox 88+
+- Safari 14+
+- Mobile browsers (iOS Safari 14+, Chrome Mobile)
+
+## Resources
+
+### Documentation
+- **Official Website:** https://snapdom.dev/
+- **GitHub Repository:** https://github.com/zumerlab/snapdom
+- **NPM Package:** https://www.npmjs.com/package/@zumer/snapdom
+- **License:** MIT
+
+### scripts/
+Add helper scripts here for automation, e.g.:
+- `batch-screenshot.js` - Capture multiple elements
+- `pdf-export.js` - Convert snapshots to PDF
+- `compare-outputs.js` - Compare SVG vs PNG quality
+
+### assets/
+Add templates and examples:
+- HTML templates for common capture scenarios
+- CSS frameworks pre-configured with snapdom
+- Boilerplate projects integrating snapdom
+
+## Related Tools
+
+- **html2canvas** - Alternative DOM capture (slower but more compatible)
+- **Orbit CSS Toolkit** - Companion toolkit by Zumerlab (https://github.com/zumerlab/orbit)
+
+## Tips & Best Practices
+
+1. **Performance**: Use `scale` instead of `width`/`height` for better performance
+2. **Fonts**: Set `embedFonts: true` to ensure custom fonts appear correctly
+3. **CORS Issues**: Use `useProxy: true` if images fail to load
+4. **Large Elements**: Break into smaller chunks for complex pages
+5. **Quality**: For PNG/JPG, use `quality: 0.95` for best quality
+6. **SVG Vectors**: Prefer SVG export for charts and graphics
+
+## Troubleshooting
+
+### Elements Not Rendering
+- Check if element has sufficient height/width
+- Verify CSS is fully loaded before capture
+- Try `straighten: true` if transforms are causing issues
+
+### Missing Fonts
+- Set `embedFonts: true`
+- Ensure fonts are loaded before calling snapdom
+- Check browser console for font loading errors
+
+### CORS Issues
+- Enable `useProxy: true`
+- Use custom proxy URL if default fails
+- Check if resources are from same origin
+
+### Performance Issues
+- Reduce `scale` value
+- Use `noShadows: true` to skip shadow rendering
+- Consider splitting large captures into smaller sections
diff --git a/skills/snapdom/references/index.md b/skills/snapdom/references/index.md
new file mode 100644
index 0000000..152e883
--- /dev/null
+++ b/skills/snapdom/references/index.md
@@ -0,0 +1,7 @@
+# Snapdom Documentation Index
+
+## Categories
+
+### Other
+**File:** `other.md`
+**Pages:** 1
diff --git a/skills/snapdom/references/other.md b/skills/snapdom/references/other.md
new file mode 100644
index 0000000..d97b8fd
--- /dev/null
+++ b/skills/snapdom/references/other.md
@@ -0,0 +1,53 @@
+# Snapdom - Other
+
+**Pages:** 1
+
+---
+
+## snapDOM – HTML to Image capture with superior accuracy and speed - Now with Plugins!
+
+**URL:** https://snapdom.dev/
+
+**Contents:**
+- 🏁 Benchmark: snapDOM vs html2canvas
+- 📦 Basic
+ - Hello SnapDOM!
+- Transforms & Shadows
+- 🅰️ ASCII Plugin
+- 🕒 Timestamp Plugin
+- 🚀 Fun Transition
+- Orbit CSS toolkit - Go to repo
+- 🔤 Google Fonts
+ - Unique Typography!
+
+Each library will capture the same DOM element to canvas 5 times. We'll calculate average speed and show the winner.
+
+Capture it just with outerTransforms / outerShadows.
+
+Google Fonts with embedFonts: true.
+
+---
diff --git a/skills/telegram-dev/SKILL.md b/skills/telegram-dev/SKILL.md
new file mode 100644
index 0000000..d064914
--- /dev/null
+++ b/skills/telegram-dev/SKILL.md
@@ -0,0 +1,760 @@
+---
+name: telegram-dev
+description: Full-stack guide to the Telegram ecosystem - covers the Bot API, Mini Apps (Web Apps), and MTProto client development. Includes complete development resources for message handling, payments, inline mode, webhooks, authentication, storage, sensor APIs, and more.
+---
+
+# Telegram Ecosystem Development Skill
+
+A comprehensive Telegram development guide covering the full stack: bot development, Mini Apps (Web Apps), and client development.
+
+## When to Use This Skill
+
+Use this skill when you need help with any of the following:
+- Developing Telegram bots
+- Creating Telegram Mini Apps
+- Building custom Telegram clients
+- Integrating Telegram payments and business features
+- Implementing webhooks and long polling
+- Using Telegram authentication and storage
+- Handling messages, media, and files
+- Implementing inline mode and keyboards
+
+## Telegram Development Ecosystem Overview
+
+### The Three Core APIs
+
+1. **Bot API** - for building bots
+   - HTTP interface, simple to use
+   - Encryption and transport are handled for you
+   - Best for: chat bots, automation tools
+
+2. **Mini Apps API** (Web Apps) - for building web apps
+   - JavaScript interface
+   - Runs inside Telegram
+   - Best for: mini apps, games, e-commerce
+
+3. **Telegram API & TDLib** - for building clients
+   - Full implementation of the Telegram protocol
+   - Supports all platforms
+   - Best for: custom clients, enterprise applications
+
+## Bot API Development
+
+### Quick Start
+
+**API endpoint:**
+```
+https://api.telegram.org/bot<token>/METHOD_NAME
+```
+
+**Getting a bot token:**
+1. Open a chat with @BotFather
+2. Send `/newbot`
+3. Follow the prompts to choose a name
+4. Receive your token
+
+**A first bot (Python):**
+```python
+import requests
+
+BOT_TOKEN = "your_bot_token_here"
+API_URL = f"https://api.telegram.org/bot{BOT_TOKEN}"
+
+# 发送消息
+def send_message(chat_id, text):
+ url = f"{API_URL}/sendMessage"
+ data = {"chat_id": chat_id, "text": text}
+ return requests.post(url, json=data)
+
+# Fetch updates (long polling)
+def get_updates(offset=None):
+ url = f"{API_URL}/getUpdates"
+ params = {"offset": offset, "timeout": 30}
+ return requests.get(url, params=params).json()
+
+# Main loop
+offset = None
+while True:
+    updates = get_updates(offset)
+    for update in updates.get("result", []):
+        message = update.get("message") or {}
+        if "text" in message:
+            # Echo the message back
+            send_message(message["chat"]["id"], f"You said: {message['text']}")
+        offset = update["update_id"] + 1
+```
+
+### Core API Methods
+
+**Update management:**
+- `getUpdates` - fetch updates via long polling
+- `setWebhook` - set a webhook
+- `deleteWebhook` - delete the webhook
+- `getWebhookInfo` - query webhook status
+
+**Message operations:**
+- `sendMessage` - send a text message
+- `sendPhoto` / `sendVideo` / `sendDocument` - send media
+- `sendAudio` / `sendVoice` - send audio
+- `sendLocation` / `sendVenue` - send locations
+- `editMessageText` - edit a message
+- `deleteMessage` - delete a message
+- `forwardMessage` / `copyMessage` - forward/copy messages
+
+**Interactive elements:**
+- `sendPoll` - send a poll (up to 12 options)
+- Inline keyboards (InlineKeyboardMarkup)
+- Reply keyboards (ReplyKeyboardMarkup)
+- `answerCallbackQuery` - respond to callback queries
+
+**File operations:**
+- `getFile` - get file info
+- `downloadFile` - download a file
+- Files up to 2 GB supported (with a local Bot API server)
+
+**Payments:**
+- `sendInvoice` - send an invoice
+- `answerPreCheckoutQuery` - handle checkout
+- Telegram Stars payments (up to 10,000 Stars)
+
+### Webhook Configuration
+
+**Setting the webhook:**
+```python
+import requests
+
+BOT_TOKEN = "your_token"
+WEBHOOK_URL = "https://yourdomain.com/webhook"
+
+requests.post(
+ f"https://api.telegram.org/bot{BOT_TOKEN}/setWebhook",
+ json={"url": WEBHOOK_URL}
+)
+```
+
+**Flask webhook example:**
+```python
+from flask import Flask, request
+import requests
+
+app = Flask(__name__)
+BOT_TOKEN = "your_token"
+
+@app.route('/webhook', methods=['POST'])
+def webhook():
+ update = request.get_json()
+
+ chat_id = update["message"]["chat"]["id"]
+ text = update["message"]["text"]
+
+    # Send a reply
+ requests.post(
+ f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
+        json={"chat_id": chat_id, "text": f"Received: {text}"}
+ )
+
+ return "OK"
+
+if __name__ == '__main__':
+ app.run(port=5000)
+```
+
+**Webhook requirements:**
+- HTTPS is mandatory
+- TLS 1.2+
+- Ports: 443, 80, 88, or 8443
+- A publicly reachable URL
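Those constraints are easy to pre-check before calling setWebhook. A convenience sketch (not part of the Bot API itself; it checks scheme and port only):

```python
from urllib.parse import urlsplit

# Ports Telegram accepts for webhook URLs
ALLOWED_PORTS = {443, 80, 88, 8443}

def is_valid_webhook_url(url):
    """Check a URL against Telegram's webhook scheme/port constraints."""
    parts = urlsplit(url)
    if parts.scheme != 'https':
        return False
    port = parts.port or 443  # https defaults to 443
    return port in ALLOWED_PORTS

print(is_valid_webhook_url('https://example.com/webhook'))       # True
print(is_valid_webhook_url('https://example.com:8443/webhook'))  # True
print(is_valid_webhook_url('http://example.com/webhook'))        # False
```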
+
+### Inline Keyboards
+
+**Creating an inline keyboard:**
+```python
+def send_inline_keyboard(chat_id):
+    keyboard = {
+        "inline_keyboard": [
+            [
+                {"text": "Button 1", "callback_data": "btn1"},
+                {"text": "Button 2", "callback_data": "btn2"}
+            ],
+            [
+                {"text": "Open link", "url": "https://example.com"}
+            ]
+        ]
+    }
+
+    requests.post(
+        f"{API_URL}/sendMessage",
+        json={
+            "chat_id": chat_id,
+            "text": "Choose an option:",
+            "reply_markup": keyboard
+        }
+    )
+```
+
+**Handling callbacks:**
+```python
+def handle_callback_query(callback_query):
+    query_id = callback_query["id"]
+    data = callback_query["data"]
+    chat_id = callback_query["message"]["chat"]["id"]
+
+    # Answer the callback query
+    requests.post(
+        f"{API_URL}/answerCallbackQuery",
+        json={"callback_query_id": query_id, "text": f"You clicked {data}"}
+    )
+
+    # Update the message
+    requests.post(
+        f"{API_URL}/editMessageText",
+        json={
+            "chat_id": chat_id,
+            "message_id": callback_query["message"]["message_id"],
+            "text": f"You selected: {data}"
+        }
+    )
+```
+
+### Inline Mode
+
+**Enabling inline mode:**
+Chat with @BotFather and send `/setinline`.
+
+**Handling inline queries:**
+```python
+def handle_inline_query(inline_query):
+    query_id = inline_query["id"]
+    query_text = inline_query["query"]
+
+    # Build the results
+    results = [
+        {
+            "type": "article",
+            "id": "1",
+            "title": "Result 1",
+            "input_message_content": {
+                "message_text": f"You searched for: {query_text}"
+            }
+        }
+    ]
+
+ requests.post(
+ f"{API_URL}/answerInlineQuery",
+ json={"inline_query_id": query_id, "results": results}
+ )
+```
+
+## Mini Apps (Web Apps) Development
+
+### Initializing a Mini App
+
+**HTML template:**
+```html
+<!DOCTYPE html>
+<html>
+<head>
+  <meta charset="UTF-8">
+  <meta name="viewport" content="width=device-width, initial-scale=1.0">
+  <script src="https://telegram.org/js/telegram-web-app.js"></script>
+  <title>My Mini App</title>
+</head>
+<body>
+  <h1>Telegram Mini App</h1>
+  <script>
+    const tg = window.Telegram.WebApp;
+    tg.ready();
+  </script>
+</body>
+</html>
+```
+
+### Core Mini App API
+
+**Key properties of the WebApp object:**
+```javascript
+// Init data
+tg.initData         // raw init string
+tg.initDataUnsafe   // parsed object
+
+// User and theme
+tg.initDataUnsafe.user  // user info
+tg.themeParams          // theme colors
+tg.colorScheme          // 'light' or 'dark'
+
+// State
+tg.isExpanded      // expanded to full height?
+tg.isFullscreen    // fullscreen?
+tg.viewportHeight  // viewport height
+tg.platform        // platform type
+
+// Version
+tg.version  // WebApp version
+```
+
+**Key methods:**
+```javascript
+// Window control
+tg.ready()               // mark the app as ready
+tg.expand()              // expand to full height
+tg.close()               // close the Mini App
+tg.requestFullscreen()   // request fullscreen
+
+// Sending data
+tg.sendData(data)        // send data to the bot
+
+// Navigation
+tg.openLink(url)         // open an external link
+tg.openTelegramLink(url) // open a Telegram link
+
+// Dialogs
+tg.showPopup(params, callback) // show a popup
+tg.showAlert(message)          // show an alert
+tg.showConfirm(message)        // show a confirmation
+
+// Sharing
+tg.shareMessage(message) // share a message
+tg.shareUrl(url)         // share a URL
+```
+
+### UI Controls
+
+**Main button (MainButton):**
+```javascript
+tg.MainButton.setText("Click me");
+tg.MainButton.show();
+tg.MainButton.enable();
+tg.MainButton.showProgress(); // show loading state
+tg.MainButton.hideProgress();
+
+tg.MainButton.onClick(() => {
+    console.log("Main button clicked");
+});
+```
+
+**Secondary button (SecondaryButton):**
+```javascript
+tg.SecondaryButton.setText("Cancel");
+tg.SecondaryButton.show();
+tg.SecondaryButton.onClick(() => {
+ tg.close();
+});
+```
+
+**Back button (BackButton):**
+```javascript
+tg.BackButton.show();
+tg.BackButton.onClick(() => {
+    // back-navigation logic
+});
+```
+
+**Haptic feedback:**
+```javascript
+tg.HapticFeedback.impactOccurred('light'); // light, medium, heavy
+tg.HapticFeedback.notificationOccurred('success'); // success, warning, error
+tg.HapticFeedback.selectionChanged();
+```
+
+### Storage API
+
+**Cloud storage:**
+```javascript
+// Save data
+tg.CloudStorage.setItem('key', 'value', (error, success) => {
+    if (success) console.log('Saved');
+});
+
+// Read data
+tg.CloudStorage.getItem('key', (error, value) => {
+    console.log('Value:', value);
+});
+
+// Delete data
+tg.CloudStorage.removeItem('key');
+
+// List all keys
+tg.CloudStorage.getKeys((error, keys) => {
+    console.log('All keys:', keys);
+});
+```
+
+**Local storage:**
+```javascript
+// Plain local storage
+localStorage.setItem('key', 'value');
+const value = localStorage.getItem('key');
+
+// Secure storage (requires biometrics)
+tg.SecureStorage.setItem('secret', 'value', callback);
+tg.SecureStorage.getItem('secret', callback);
+```
+
+### Biometric Authentication
+
+```javascript
+const bioManager = tg.BiometricManager;
+
+// Initialize
+bioManager.init(() => {
+    if (bioManager.isInited) {
+        console.log('Supported type:', bioManager.biometricType);
+        // 'finger', 'face', 'unknown'
+
+        if (bioManager.isAccessGranted) {
+            // already granted, ready to use
+        } else {
+            // request access
+            bioManager.requestAccess({reason: 'Identity verification required'}, (success) => {
+                if (success) {
+                    console.log('Access granted');
+                }
+            });
+        }
+    }
+});
+
+// Authenticate
+bioManager.authenticate({reason: 'Confirm this action'}, (success, token) => {
+    if (success) {
+        console.log('Authenticated, token:', token);
+ }
+});
+```
+
+### Location and Sensors
+
+**Getting location:**
+```javascript
+tg.LocationManager.init(() => {
+ if (tg.LocationManager.isInited) {
+ tg.LocationManager.getLocation((location) => {
+            console.log('Latitude:', location.latitude);
+            console.log('Longitude:', location.longitude);
+ });
+ }
+});
+```
+
+**Accelerometer:**
+```javascript
+tg.Accelerometer.start({refresh_rate: 100}, (started) => {
+ if (started) {
+ tg.Accelerometer.onEvent((event) => {
+            console.log('Acceleration:', event.x, event.y, event.z);
+ });
+ }
+});
+
+// Stop
+tg.Accelerometer.stop();
+```
+
+**Gyroscope:**
+```javascript
+tg.Gyroscope.start({refresh_rate: 100}, callback);
+tg.Gyroscope.onEvent((event) => {
+    console.log('Rotation rate:', event.x, event.y, event.z);
+});
+```
+
+**Device orientation:**
+```javascript
+tg.DeviceOrientation.start({refresh_rate: 100}, callback);
+tg.DeviceOrientation.onEvent((event) => {
+    console.log('Orientation:', event.absolute, event.alpha, event.beta, event.gamma);
+});
+```
+
+### Payments Integration
+
+**Starting a payment (Telegram Stars):**
+```javascript
+tg.openInvoice('https://t.me/$invoice_link', (status) => {
+    if (status === 'paid') {
+        console.log('Payment succeeded');
+    } else if (status === 'cancelled') {
+        console.log('Payment cancelled');
+    } else if (status === 'failed') {
+        console.log('Payment failed');
+    }
+});
+```
+
+### Data Validation
+
+**Server-side validation of initData (Python):**
+```python
+import hmac
+import hashlib
+from urllib.parse import parse_qs
+
+def validate_init_data(init_data, bot_token):
+    # Parse the query string
+    parsed = parse_qs(init_data)
+    received_hash = parsed.get('hash', [''])[0]
+
+    # Drop the hash field itself
+    data_check_arr = []
+    for key, value in parsed.items():
+        if key != 'hash':
+            data_check_arr.append(f"{key}={value[0]}")
+
+    # Sort alphabetically
+    data_check_arr.sort()
+    data_check_string = '\n'.join(data_check_arr)
+
+    # Derive the secret key
+    secret_key = hmac.new(
+        b"WebAppData",
+        bot_token.encode(),
+        hashlib.sha256
+    ).digest()
+
+    # Compute the hash
+    calculated_hash = hmac.new(
+        secret_key,
+        data_check_string.encode(),
+        hashlib.sha256
+    ).hexdigest()
+
+    return hmac.compare_digest(calculated_hash, received_hash)
+```
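A self-contained round trip of the scheme above: sign a synthetic initData string the way Telegram would, then validate it (the token and field values are made-up test data):

```python
import hashlib
import hmac
from urllib.parse import urlencode, parse_qs

def sign_init_data(fields, bot_token):
    """Produce an initData query string signed like Mini App initData."""
    data_check_string = '\n'.join(f"{k}={v}" for k, v in sorted(fields.items()))
    secret_key = hmac.new(b"WebAppData", bot_token.encode(), hashlib.sha256).digest()
    sig = hmac.new(secret_key, data_check_string.encode(), hashlib.sha256).hexdigest()
    return urlencode({**fields, "hash": sig})

def validate_init_data(init_data, bot_token):
    """Recompute the HMAC over the sorted fields and compare."""
    parsed = {k: v[0] for k, v in parse_qs(init_data).items()}
    received_hash = parsed.pop('hash', '')
    data_check_string = '\n'.join(f"{k}={v}" for k, v in sorted(parsed.items()))
    secret_key = hmac.new(b"WebAppData", bot_token.encode(), hashlib.sha256).digest()
    calculated = hmac.new(secret_key, data_check_string.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(calculated, received_hash)

token = "123456:TEST_TOKEN"  # made-up token
good = sign_init_data({"auth_date": "1700000000", "query_id": "AA1"}, token)
print(validate_init_data(good, token))                        # True
print(validate_init_data(good.replace("AA1", "AA2"), token))  # False
```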
+
+### Launching a Mini App
+
+**From a keyboard button:**
+```python
+keyboard = {
+    "keyboard": [[
+        {
+            "text": "Open app",
+            "web_app": {"url": "https://yourdomain.com/app"}
+        }
+    ]],
+    "resize_keyboard": True
+}
+
+requests.post(
+ f"{API_URL}/sendMessage",
+ json={
+ "chat_id": chat_id,
+        "text": "Tap the button to open the app",
+ "reply_markup": keyboard
+ }
+)
+```
+
+**From an inline button:**
+```python
+keyboard = {
+ "inline_keyboard": [[
+ {
+            "text": "Launch app",
+ "web_app": {"url": "https://yourdomain.com/app"}
+ }
+ ]]
+}
+```
+
+**From the menu button:**
+Chat with @BotFather:
+```
+/setmenubutton
+→ select your bot
+→ provide the URL: https://yourdomain.com/app
+```
+
+## Client Development (TDLib)
+
+### Using TDLib
+
+**Python example (python-telegram):**
+```python
+from telegram.client import Telegram
+
+tg = Telegram(
+ api_id='your_api_id',
+ api_hash='your_api_hash',
+ phone='+1234567890',
+ database_encryption_key='changeme1234',
+)
+
+tg.login()
+
+# Send a message
+result = tg.send_message(
+ chat_id=123456789,
+ text='Hello from TDLib!'
+)
+
+# Get the chat list
+result = tg.get_chats()
+result.wait()
+chats = result.update
+
+print(chats)
+
+tg.stop()
+```
+
+### The MTProto Protocol
+
+**Highlights:**
+- End-to-end encrypted secret chats
+- High performance
+- Access to all Telegram features
+- Requires an API ID/hash (obtain them at https://my.telegram.org)
+
+## Best Practices
+
+### Bot Development
+
+1. **Error handling**
+   ```python
+   try:
+       response = requests.post(url, json=data, timeout=10)
+       response.raise_for_status()
+   except requests.exceptions.RequestException as e:
+       print(f"Request failed: {e}")
+   ```
+
+2. **Rate limits**
+   - Group messages: at most 20 per minute per group
+   - Individual chats: roughly 1 message per second
+   - Global: about 30 messages per second across all chats
+
+3. **Prefer webhooks over long polling**
+   - More efficient
+   - Lower latency
+   - Better scalability
+
+4. **Data validation**
+   - Always validate initData
+   - Never trust client-side data
+   - Validate every operation server-side
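A minimal per-chat throttle honoring rate limits like those above can be built with a sliding window. This is a sketch; real bots usually rely on a library-provided rate limiter (e.g. python-telegram-bot's AIORateLimiter):

```python
import time
from collections import deque

class ChatThrottle:
    """Allow at most `limit` sends per `window` seconds for one chat."""
    def __init__(self, limit=20, window=60.0, clock=time.monotonic):
        self.limit, self.window, self.clock = limit, window, clock
        self.sent = deque()

    def try_send(self):
        now = self.clock()
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()          # drop timestamps outside the window
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False                     # caller should queue or retry later

# Simulated clock: 25 attempts within one minute, group limit 20/min
t = [0.0]
th = ChatThrottle(limit=20, window=60.0, clock=lambda: t[0])
results = []
for i in range(25):
    results.append(th.try_send())
    t[0] += 1.0                          # one attempt per second
print(sum(results))  # 20 sends allowed, 5 rejected
```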
+
+### Mini App Development
+
+1. **Responsive design**
+   ```javascript
+   // React to theme changes
+   tg.onEvent('themeChanged', () => {
+       document.body.style.backgroundColor = tg.themeParams.bg_color;
+   });
+
+   // React to viewport changes
+   tg.onEvent('viewportChanged', () => {
+       console.log('New height:', tg.viewportHeight);
+   });
+   ```
+
+2. **Performance**
+   - Minimize the JavaScript bundle size
+   - Use lazy loading
+   - Optimize images and assets
+
+3. **User experience**
+   - Adapt to dark/light themes
+   - Use native UI controls (MainButton, etc.)
+   - Provide haptic feedback
+   - Respond to user actions quickly
+
+4. **Security**
+   - Enforce HTTPS
+   - Validate initData
+   - Never store secrets client-side
+   - Use SecureStorage for keys
+
+## Popular Libraries and Tools
+
+### Python
+- `python-telegram-bot` - full-featured bot framework
+- `aiogram` - async bot framework
+- `telethon` / `pyrogram` - MTProto clients
+
+### Node.js
+- `node-telegram-bot-api` - Bot API wrapper
+- `telegraf` - modern bot framework
+- `grammy` - lightweight framework
+
+### Other Languages
+- PHP: `telegram-bot-sdk`
+- Go: `telegram-bot-api`
+- Java: `TelegramBots`
+- C#: `Telegram.Bot`
+
+## Reference Resources
+
+### Official Documentation
+- Bot API: https://core.telegram.org/bots/api
+- Mini Apps: https://core.telegram.org/bots/webapps
+- Mini Apps Platform: https://docs.telegram-mini-apps.com
+- Telegram API: https://core.telegram.org
+
+### GitHub Repositories
+- Bot API server: https://github.com/tdlib/telegram-bot-api
+- Android client: https://github.com/DrKLO/Telegram
+- Desktop client: https://github.com/telegramdesktop/tdesktop
+- Official organization: https://github.com/orgs/TelegramOfficial/repositories
+
+### Tools
+- @BotFather - create and manage bots
+- https://my.telegram.org - obtain an API ID/hash
+- Telegram Web App test environment
+
+## Reference Files
+
+This skill ships with a detailed index of Telegram development resources and complete implementation templates:
+
+- **index.md** - full resource links and quick navigation
+- **Telegram_Bot_按钮和键盘实现模板.md** - interactive button and keyboard implementation guide (404 lines, 12 KB)
+  - All three button types explained (Inline/Reply/Command Menu)
+  - Side-by-side python-telegram-bot and Telethon implementations
+  - Complete ready-to-use code samples and project structure
+  - Handler system, error handling, and deployment options
+- **动态视图对齐实现文档.md** - Telegram data presentation guide (407 lines, 12 KB)
+  - Smart dynamic alignment algorithm (three-step method, O(n×m) complexity)
+  - Clean alignment for monospace-font environments
+  - Smart number formatting (automatic B/M/K abbreviation)
+  - Professional leaderboard and data-table layouts
+
+These focused guides cover the core Telegram bot development scenarios:
+- Every way to implement button and keyboard interactions
+- Professional formatting of messages and data
+- Practical best practices and quick references
+
+---
+
+**Use this skill to master full-stack development across the Telegram ecosystem!**
diff --git a/skills/telegram-dev/references/Telegram_Bot_按钮和键盘实现模板.md b/skills/telegram-dev/references/Telegram_Bot_按钮和键盘实现模板.md
new file mode 100644
index 0000000..ecb2587
--- /dev/null
+++ b/skills/telegram-dev/references/Telegram_Bot_按钮和键盘实现模板.md
@@ -0,0 +1,404 @@
+# Telegram Bot Buttons and Keyboards Implementation Guide
+
+> A complete reference for building interactive Telegram bot features
+
+---
+
+## 📋 Contents
+
+1. [Button and Keyboard Types](#button-and-keyboard-types)
+2. [Implementation Comparison](#implementation-comparison)
+3. [Core Code Examples](#core-code-examples)
+4. [Best Practices](#best-practices)
+
+---
+
+## Button and Keyboard Types
+
+### 1. Inline Keyboard
+
+**Characteristics**:
+- Shown below the message
+- Clicks trigger callbacks rather than sending messages
+- Supports callback data, URLs, switch-inline queries, and more
+
+**Use cases**: confirm/cancel, menu navigation, pagination, settings
+
+### 2. Reply Keyboard
+
+**Characteristics**:
+- Shown above the input field
+- Clicks send text messages
+- Can be persistent or one-time
+
+**Use cases**: quick commands, frequent actions, form input, main menus
+
+### 3. Bot Command Menu
+
+**Characteristics**:
+- Shown via the "/" button next to the input field
+- Configured through BotFather or the API
+- Provides a list of commands with descriptions
+
+**Use cases**: feature index, onboarding new users, quick command access
+
+### 4. Type Comparison
+
+| Feature | Inline | Reply | Command Menu |
+|---------|--------|-------|--------------|
+| Position | below message | above input field | "/" menu |
+| Trigger | callback query | text message | command |
+| Persistence | tied to message | configurable | always present |
+| Use case | transient interaction | persistent features | command index |
+
+---
+
+## 实现方式对比
+
+### python-telegram-bot(推荐 Bot 开发)
+
+**优点**:
+- 官方推荐,完整的 Handler 系统
+- 丰富的按钮和键盘支持
+- 异步版本性能优异
+
+**安装**:
+```bash
+pip install python-telegram-bot==20.7
+```
+
+### Telethon(适合用户账号自动化)
+
+**优点**:
+- 完整的 MTProto API 访问
+- 可使用用户账号和 Bot
+- 强大的消息监听能力
+
+**安装**:
+```bash
+pip install telethon cryptg
+```
+
+---
+
+## 核心代码示例
+
+### 1. Inline Keyboard 实现
+
+**python-telegram-bot:**
+```python
+from telegram import Update, InlineKeyboardButton, InlineKeyboardMarkup
+from telegram.ext import Application, CommandHandler, CallbackQueryHandler, ContextTypes
+
+async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
+ """显示内联键盘"""
+ keyboard = [
+ [
+ InlineKeyboardButton("📊 查看数据", callback_data="view_data"),
+ InlineKeyboardButton("⚙️ 设置", callback_data="settings"),
+ ],
+ [
+ InlineKeyboardButton("🔗 访问网站", url="https://example.com"),
+ ],
+ ]
+ reply_markup = InlineKeyboardMarkup(keyboard)
+ await update.message.reply_text("请选择:", reply_markup=reply_markup)
+
+async def button_callback(update: Update, context: ContextTypes.DEFAULT_TYPE):
+ """处理按钮点击"""
+ query = update.callback_query
+ await query.answer() # 必须调用
+
+ if query.data == "view_data":
+ await query.edit_message_text("显示数据...")
+ elif query.data == "settings":
+ await query.edit_message_text("设置选项...")
+
+# 注册处理器
+app = Application.builder().token("TOKEN").build()
+app.add_handler(CommandHandler("start", start))
+app.add_handler(CallbackQueryHandler(button_callback))
+app.run_polling()
+```
+
+**Telethon:**
+```python
+from telethon import TelegramClient, events, Button
+
+client = TelegramClient('bot', api_id, api_hash).start(bot_token=BOT_TOKEN)
+
+@client.on(events.NewMessage(pattern='/start'))
+async def start(event):
+ buttons = [
+ [Button.inline("📊 查看数据", b"view_data"), Button.inline("⚙️ 设置", b"settings")],
+ [Button.url("🔗 访问网站", "https://example.com")]
+ ]
+ await event.respond("请选择:", buttons=buttons)
+
+@client.on(events.CallbackQuery)
+async def callback(event):
+ if event.data == b"view_data":
+ await event.edit("显示数据...")
+ elif event.data == b"settings":
+ await event.edit("设置选项...")
+
+client.run_until_disconnected()
+```
+
+### 2. Reply Keyboard 实现
+
+**python-telegram-bot:**
+```python
+from telegram import KeyboardButton, ReplyKeyboardMarkup, ReplyKeyboardRemove
+
+async def menu(update: Update, context: ContextTypes.DEFAULT_TYPE):
+ """显示底部键盘"""
+ keyboard = [
+ [KeyboardButton("📊 查看数据"), KeyboardButton("⚙️ 设置")],
+ [KeyboardButton("📚 帮助"), KeyboardButton("❌ 隐藏键盘")],
+ ]
+ reply_markup = ReplyKeyboardMarkup(
+ keyboard,
+ resize_keyboard=True,
+ one_time_keyboard=False
+ )
+ await update.message.reply_text("菜单已激活", reply_markup=reply_markup)
+
+async def handle_text(update: Update, context: ContextTypes.DEFAULT_TYPE):
+ """处理文本消息"""
+ text = update.message.text
+ if text == "📊 查看数据":
+ await update.message.reply_text("显示数据...")
+ elif text == "❌ 隐藏键盘":
+ await update.message.reply_text("已隐藏", reply_markup=ReplyKeyboardRemove())
+```
+
+**Telethon:**
+```python
+@client.on(events.NewMessage(pattern='/menu'))
+async def menu(event):
+ buttons = [
+ [Button.text("📊 查看数据"), Button.text("⚙️ 设置")],
+ [Button.text("📚 帮助"), Button.text("❌ 隐藏键盘")]
+ ]
+ await event.respond("菜单已激活", buttons=buttons)
+
+@client.on(events.NewMessage)
+async def handle_text(event):
+ if event.text == "📊 查看数据":
+ await event.respond("显示数据...")
+```
+
+### 3. Bot Command Menu 设置
+
+**通过 BotFather:**
+```
+1. 发送 /setcommands 到 @BotFather
+2. 选择你的 Bot
+3. 输入命令列表(每行格式:command - description)
+
+start - 启动机器人
+help - 获取帮助
+menu - 显示主菜单
+settings - 配置设置
+```
+
+**通过 API(python-telegram-bot):**
+```python
+from telegram import BotCommand
+
+async def set_commands(app: Application):
+ """设置命令菜单"""
+ commands = [
+ BotCommand("start", "启动机器人"),
+ BotCommand("help", "获取帮助"),
+ BotCommand("menu", "显示主菜单"),
+ BotCommand("settings", "配置设置"),
+ ]
+ await app.bot.set_my_commands(commands)
+
+# 在启动时调用
+app.post_init = set_commands
+```
+
+### 4. 项目结构示例
+
+```
+telegram_bot/
+├── bot.py # 主程序
+├── config.py # 配置管理
+├── requirements.txt
+├── .env
+├── handlers/
+│ ├── command_handlers.py # 命令处理器
+│ ├── callback_handlers.py # 回调处理器
+│ └── message_handlers.py # 消息处理器
+├── keyboards/
+│ ├── inline_keyboards.py # 内联键盘布局
+│ └── reply_keyboards.py # 回复键盘布局
+└── utils/
+ ├── logger.py # 日志
+ └── database.py # 数据库
+```
+
+**模块化示例(keyboards/inline_keyboards.py):**
+```python
+from telegram import InlineKeyboardButton, InlineKeyboardMarkup
+
+def get_main_menu():
+ """主菜单键盘"""
+ return InlineKeyboardMarkup([
+ [
+ InlineKeyboardButton("📊 数据", callback_data="data"),
+ InlineKeyboardButton("⚙️ 设置", callback_data="settings"),
+ ],
+ [InlineKeyboardButton("📚 帮助", callback_data="help")],
+ ])
+
+def get_data_menu():
+ """数据菜单键盘"""
+ return InlineKeyboardMarkup([
+ [
+ InlineKeyboardButton("📈 实时", callback_data="data_realtime"),
+ InlineKeyboardButton("📊 历史", callback_data="data_history"),
+ ],
+ [InlineKeyboardButton("⬅️ 返回", callback_data="back")],
+ ])
+```
+
+---
+
+## 最佳实践
+
+### 1. Handler 优先级
+
+```python
+# 先注册先匹配,按从特殊到通用的顺序
+app.add_handler(CommandHandler("start", start)) # 1. 特定命令
+app.add_handler(CallbackQueryHandler(callback)) # 2. 回调查询
+app.add_handler(ConversationHandler(...)) # 3. 对话流程
+app.add_handler(MessageHandler(filters.TEXT, text_msg)) # 4. 通用消息(最后)
+```
+
+### 2. 错误处理
+
+```python
+import logging
+
+logger = logging.getLogger(__name__)
+
+async def error_handler(update: Update, context: ContextTypes.DEFAULT_TYPE):
+    """全局错误处理"""
+    logger.error(f"更新 {update} 引起错误", exc_info=context.error)
+
+ # 通知用户
+ if update and update.effective_message:
+ await update.effective_message.reply_text("操作失败,请重试")
+
+app.add_error_handler(error_handler)
+```
+
+### 3. 回调数据管理
+
+```python
+# 使用结构化的 callback_data
+callback_data = "action:page:item" # 例如 "view:1:product_123"
+
+# 解析回调数据
+async def callback(update: Update, context: ContextTypes.DEFAULT_TYPE):
+ query = update.callback_query
+ parts = query.data.split(":")
+ action, page, item = parts
+
+ if action == "view":
+ await show_item(query, page, item)
+```
+
+### 4. 键盘设计原则
+
+- **简洁**:每行最多 2-3 个按钮
+- **清晰**:使用 emoji 增强识别度
+- **一致**:保持统一的布局风格
+- **响应**:及时反馈用户操作
+
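“每行最多 2-3 个按钮”这条原则可以用一个小工具函数落实:把一维按钮列表按固定个数切成二维布局,再交给 `InlineKeyboardMarkup`。下面是一个示意实现(`chunk_buttons` 是假设的辅助函数名,不是库自带的):

```python
def chunk_buttons(buttons, per_row=2):
    """把一维按钮列表切成每行 per_row 个的二维布局,配合 InlineKeyboardMarkup 使用"""
    return [buttons[i:i + per_row] for i in range(0, len(buttons), per_row)]
```

用法示意:`InlineKeyboardMarkup(chunk_buttons(btns, per_row=2))` 即可得到每行两个按钮的键盘,按钮数量变化时布局自动适配。
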
+### 5. 安全考虑
+
+```python
+# 验证用户权限
+ADMIN_IDS = [123456789]
+
+async def admin_only(update: Update, context: ContextTypes.DEFAULT_TYPE):
+ user_id = update.effective_user.id
+ if user_id not in ADMIN_IDS:
+ await update.message.reply_text("无权限")
+ return
+
+ # 执行管理员操作
+```
+
+### 6. 部署方案
+
+**Webhook(推荐生产环境):**
+```python
+from flask import Flask, request
+from telegram import Update
+
+app_flask = Flask(__name__)
+
+@app_flask.route('/webhook', methods=['POST'])
+def webhook():
+    # bot 与 application 需在别处初始化并保持运行
+    update = Update.de_json(request.get_json(force=True), bot)
+    application.update_queue.put_nowait(update)  # update_queue 是 asyncio.Queue
+    return "OK"
+
+# 设置 webhook(必须是公网可访问的 HTTPS 地址)
+bot.set_webhook("https://yourdomain.com/webhook")
+```
+
+**Systemd Service(Linux):**
+```ini
+[Unit]
+Description=Telegram Bot
+After=network.target
+
+[Service]
+Type=simple
+User=your_user
+WorkingDirectory=/path/to/bot
+ExecStart=/path/to/venv/bin/python bot.py
+Restart=always
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### 7. 常用库版本
+
+```txt
+# requirements.txt
+python-telegram-bot==20.7
+python-dotenv==1.0.0
+aiosqlite==0.19.0
+httpx==0.25.2
+```
+
+---
+
+## 快速参考
+
+### Inline Keyboard 按钮类型
+
+```python
+InlineKeyboardButton("文本", callback_data="data") # 回调按钮
+InlineKeyboardButton("链接", url="https://...") # URL按钮
+InlineKeyboardButton("切换", switch_inline_query="") # 内联查询
+InlineKeyboardButton("登录", login_url=...) # 登录按钮
+InlineKeyboardButton("支付", pay=True) # 支付按钮
+InlineKeyboardButton("应用", web_app=WebAppInfo(...)) # Mini App
+```
+
+### 常用事件类型
+
+- `events.NewMessage` - 新消息
+- `events.CallbackQuery` - 回调查询
+- `events.InlineQuery` - 内联查询
+- `events.ChatAction` - 群组动作
+
+---
+
+**这份指南涵盖了 Telegram Bot 按钮和键盘的所有核心实现!**
diff --git a/skills/telegram-dev/references/index.md b/skills/telegram-dev/references/index.md
new file mode 100644
index 0000000..960c183
--- /dev/null
+++ b/skills/telegram-dev/references/index.md
@@ -0,0 +1,470 @@
+# Telegram 生态开发资源索引
+
+## 官方文档
+
+### Bot API
+**主文档:** https://core.telegram.org/bots/api
+**描述:** Telegram Bot API 完整参考文档
+
+**核心功能:**
+- 消息发送和接收
+- 媒体文件处理
+- 内联模式
+- 支付集成
+- Webhook 配置
+- 游戏和投票
+
+### Mini Apps (Web Apps)
+**主文档:** https://core.telegram.org/bots/webapps
+**完整平台:** https://docs.telegram-mini-apps.com
+**描述:** Telegram 小程序开发文档
+
+**核心功能:**
+- WebApp API
+- 主题和 UI 控件
+- 存储(Cloud/Device/Secure)
+- 生物识别认证
+- 位置和传感器
+- 支付集成
+
+### Telegram API & MTProto
+**主文档:** https://core.telegram.org
+**描述:** 完整的 Telegram 协议和客户端开发
+
+**核心功能:**
+- MTProto 协议
+- TDLib 客户端库
+- 认证和加密
+- 文件操作
+- Secret Chats
+
+## 官方 GitHub 仓库
+
+### Bot API 服务器
+**仓库:** https://github.com/tdlib/telegram-bot-api
+**描述:** Telegram Bot API 服务器实现
+**特点:**
+- 本地模式部署
+- 支持大文件(最高 2000 MB)
+- C++ 实现
+- TDLib 基础
+
+### Android 客户端
+**仓库:** https://github.com/DrKLO/Telegram
+**描述:** 官方 Android 客户端源代码
+**特点:**
+- 完整的 Android 实现
+- Material Design
+- 可自定义编译
+
+### Desktop 客户端
+**仓库:** https://github.com/telegramdesktop/tdesktop
+**描述:** 官方桌面客户端 (Windows, macOS, Linux)
+**特点:**
+- Qt/C++ 实现
+- 跨平台支持
+- 完整功能
+
+### 官方组织
+**组织页面:** https://github.com/orgs/TelegramOfficial/repositories
+**包含:**
+- Beta 版本
+- 支持工具
+- 示例代码
+
+## API 方法分类
+
+### 更新管理
+- `getUpdates` - 长轮询
+- `setWebhook` - 设置 Webhook
+- `deleteWebhook` - 删除 Webhook
+- `getWebhookInfo` - Webhook 信息
+
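以上方法本质上都是对 `https://api.telegram.org/bot<token>/<method>` 的 HTTP 调用。下面是一个示意性的 URL 与 `getUpdates` 长轮询参数构造草图(函数名为假设,真正发请求的部分仅以注释示意):

```python
BASE = "https://api.telegram.org"

def api_url(token: str, method: str) -> str:
    """拼出 Bot API 方法地址,例如 getUpdates / setWebhook"""
    return f"{BASE}/bot{token}/{method}"

def build_get_updates_params(offset=None, timeout=30):
    """getUpdates 长轮询参数;offset 传上次 update_id + 1 以确认旧更新"""
    params = {"timeout": timeout}
    if offset is not None:
        params["offset"] = offset
    return params

# 实际调用示意(需要 requests / httpx 等 HTTP 库):
# resp = httpx.get(api_url(TOKEN, "getUpdates"), params=build_get_updates_params(offset))
```
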
+### 消息操作
+**发送消息:**
+- `sendMessage` - 文本消息
+- `sendPhoto` - 图片
+- `sendVideo` - 视频
+- `sendDocument` - 文档
+- `sendAudio` - 音频
+- `sendVoice` - 语音
+- `sendLocation` - 位置
+- `sendVenue` - 地点
+- `sendContact` - 联系人
+- `sendPoll` - 投票
+- `sendDice` - 骰子/飞镖
+
+**编辑消息:**
+- `editMessageText` - 编辑文本
+- `editMessageCaption` - 编辑标题
+- `editMessageMedia` - 编辑媒体
+- `editMessageReplyMarkup` - 编辑键盘
+- `deleteMessage` - 删除消息
+
+**其他操作:**
+- `forwardMessage` - 转发消息
+- `copyMessage` - 复制消息
+- `sendChatAction` - 发送动作(输入中...)
+
+### 文件操作
+- `getFile` - 获取文件信息
+- 文件下载 URL: `https://api.telegram.org/file/bot<token>/<file_path>`
+- 文件上传:支持 multipart/form-data
+- 最大文件:50 MB (标准), 2000 MB (本地 Bot API)
+
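先调用 `getFile` 拿到 `file_path`,再按上面的下载 URL 模式拼接地址。下面是一个示意性的拼接函数(函数名为假设):

```python
def file_download_url(token: str, file_path: str) -> str:
    """根据 getFile 返回的 file_path 拼出文件下载地址"""
    return f"https://api.telegram.org/file/bot{token}/{file_path}"
```
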
+### 内联模式
+- `answerInlineQuery` - 响应内联查询
+- 结果类型:article, photo, gif, video, audio, voice, document, location, venue, contact, game, sticker
+
+### 回调查询
+- `answerCallbackQuery` - 响应按钮点击
+- 可显示通知或警告
+
+### 支付
+- `sendInvoice` - 发送发票
+- `answerPreCheckoutQuery` - 预结账
+- `answerShippingQuery` - 配送查询
+- 支持提供商:Stripe, Yandex.Money, Telegram Stars
+
+### 游戏
+- `sendGame` - 发送游戏
+- `setGameScore` - 设置分数
+- `getGameHighScores` - 获取排行榜
+
+### 群组管理
+- `banChatMember` / `unbanChatMember` - 封禁/解封(`kickChatMember` 为旧版名称)
+- `restrictChatMember` - 限制权限
+- `promoteChatMember` - 提升管理员
+- `setChatTitle` / `setChatDescription` - 设置信息
+- `setChatPhoto` - 设置头像
+- `pinChatMessage` / `unpinChatMessage` - 置顶消息
+
+## Mini Apps API 详解
+
+### 初始化
+```javascript
+const tg = window.Telegram.WebApp;
+tg.ready();
+tg.expand();
+```
+
+### 主要对象
+- **WebApp** - 主接口
+- **MainButton** - 主按钮
+- **SecondaryButton** - 次要按钮
+- **BackButton** - 返回按钮
+- **SettingsButton** - 设置按钮
+- **HapticFeedback** - 触觉反馈
+- **CloudStorage** - 云存储
+- **BiometricManager** - 生物识别
+- **LocationManager** - 位置服务
+- **Accelerometer** - 加速度计
+- **Gyroscope** - 陀螺仪
+- **DeviceOrientation** - 设备方向
+
+### 事件系统
+40+ 事件包括:
+- `themeChanged` - 主题改变
+- `viewportChanged` - 视口改变
+- `mainButtonClicked` - 主按钮点击
+- `backButtonClicked` - 返回按钮点击
+- `settingsButtonClicked` - 设置按钮点击
+- `invoiceClosed` - 支付完成
+- `popupClosed` - 弹窗关闭
+- `qrTextReceived` - 扫码结果
+- `clipboardTextReceived` - 剪贴板文本
+- `writeAccessRequested` - 写入权限请求
+- `contactRequested` - 联系人请求
+
+### 主题参数
+```javascript
+tg.themeParams = {
+ bg_color, // 背景色
+ text_color, // 文本色
+ hint_color, // 提示色
+ link_color, // 链接色
+ button_color, // 按钮色
+ button_text_color, // 按钮文本色
+ secondary_bg_color, // 次要背景色
+ header_bg_color, // 头部背景色
+ accent_text_color, // 强调文本色
+ section_bg_color, // 区块背景色
+ section_header_text_color, // 区块头文本色
+ subtitle_text_color, // 副标题色
+ destructive_text_color // 危险操作色
+}
+```
+
+## 开发工具
+
+### @BotFather 命令
+创建和管理 Bot 的核心工具:
+
+**Bot 管理:**
+- `/newbot` - 创建新 Bot
+- `/mybots` - 管理我的 Bots
+- `/deletebot` - 删除 Bot
+- `/token` - 重新生成 token
+
+**设置命令:**
+- `/setname` - 设置名称
+- `/setdescription` - 设置描述
+- `/setabouttext` - 设置关于文本
+- `/setuserpic` - 设置头像
+
+**功能配置:**
+- `/setcommands` - 设置命令列表
+- `/setinline` - 启用内联模式
+- `/setinlinefeedback` - 内联反馈
+- `/setjoingroups` - 允许加入群组
+- `/setprivacy` - 隐私模式
+
+**支付和游戏:**
+- `/setgamescores` - 游戏分数
+- `/setpayments` - 配置支付
+
+**Mini Apps:**
+- `/newapp` - 创建 Mini App
+- `/myapps` - 管理 Mini Apps
+- `/setmenubutton` - 设置菜单按钮
+
+### API ID 获取
+访问 https://my.telegram.org
+1. 登录账号
+2. 进入 API development tools
+3. 创建应用
+4. 获取 API ID 和 API Hash
+
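拿到 API ID/Hash 后,惯例是放进环境变量而不是写死在代码里。下面是一个示意性的读取函数(环境变量名 `TG_API_ID`/`TG_API_HASH` 为假设),随后可用于初始化 Telethon 客户端:

```python
import os

def load_telegram_credentials(env=None):
    """从环境变量读取 api_id(整数)和 api_hash;缺失时抛 KeyError"""
    env = os.environ if env is None else env
    return int(env["TG_API_ID"]), env["TG_API_HASH"]

# 使用示意:
# api_id, api_hash = load_telegram_credentials()
# client = TelegramClient('session', api_id, api_hash)
```
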
+## 常用 Python 库
+
+### python-telegram-bot
+```bash
+pip install python-telegram-bot
+```
+
+**特点:**
+- 完整的 Bot API 包装
+- 异步和同步支持
+- 丰富的扩展
+- 活跃维护
+
+**基础示例:**
+```python
+from telegram import Update
+from telegram.ext import Application, CommandHandler, ContextTypes
+
+async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
+ await update.message.reply_text('你好!')
+
+app = Application.builder().token("TOKEN").build()
+app.add_handler(CommandHandler("start", start))
+app.run_polling()
+```
+
+### aiogram
+```bash
+pip install aiogram
+```
+
+**特点:**
+- 纯异步
+- 高性能
+- FSM 状态机
+- 中间件系统
+
+### Telethon / Pyrogram
+MTProto 客户端库:
+```bash
+pip install telethon
+pip install pyrogram
+```
+
+**用途:**
+- 自定义客户端
+- 用户账号自动化
+- 完整 Telegram 功能
+
+## 常用 Node.js 库
+
+### node-telegram-bot-api
+```bash
+npm install node-telegram-bot-api
+```
+
+### Telegraf
+```bash
+npm install telegraf
+```
+
+**特点:**
+- 现代化
+- 中间件架构
+- TypeScript 支持
+
+### grammY
+```bash
+npm install grammy
+```
+
+**特点:**
+- 轻量级
+- 类型安全
+- 插件生态
+
+## 部署选项
+
+### Webhook 托管
+**推荐平台:**
+- Heroku
+- AWS Lambda
+- Google Cloud Functions
+- Azure Functions
+- Vercel
+- Railway
+- Render
+
+**要求:**
+- HTTPS 支持
+- 公网可访问
+- 支持的端口:443, 80, 88, 8443
+
+### 长轮询托管
+**推荐平台:**
+- VPS (Vultr, DigitalOcean, Linode)
+- Raspberry Pi
+- 本地服务器
+
+**优点:**
+- 无需 HTTPS
+- 简单配置
+- 适合开发测试
+
+## 安全最佳实践
+
+1. **Token 安全**
+ - 不要提交到 Git
+ - 使用环境变量
+ - 定期轮换
+
+2. **数据验证**
+ - 验证 initData
+ - 服务器端验证
+ - 不信任客户端
+
+3. **权限控制**
+ - 检查用户权限
+ - 管理员验证
+ - 群组权限
+
+4. **速率限制**
+ - 实现请求限制
+ - 防止滥用
+ - 监控异常
+
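其中“验证 initData”可以落实为官方文档描述的 HMAC 校验:先用字符串 `"WebAppData"` 作为 key 对 bot token 求 HMAC-SHA256 得到 secret key,再对按字段名排序拼成的 data-check-string 求 HMAC,并与 `hash` 字段比对。下面是一个示意实现(函数名为假设):

```python
import hashlib
import hmac
from urllib.parse import parse_qsl

def validate_init_data(init_data: str, bot_token: str) -> bool:
    """在服务器端校验 Telegram Mini App initData 的 HMAC 签名(示意实现)"""
    params = dict(parse_qsl(init_data, keep_blank_values=True))
    received_hash = params.pop("hash", "")
    # 除 hash 外的字段按名称排序,拼成 key=value 并以换行连接
    data_check_string = "\n".join(f"{k}={v}" for k, v in sorted(params.items()))
    secret_key = hmac.new(b"WebAppData", bot_token.encode(), hashlib.sha256).digest()
    expected = hmac.new(secret_key, data_check_string.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_hash)
```

校验通过后再信任 `user` 等字段;任何字段被篡改或换了 token,签名都不会匹配。
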
+## 调试技巧
+
+### Bot 调试
+```python
+import logging
+logging.basicConfig(level=logging.DEBUG)
+```
+
+### Mini App 调试
+```javascript
+// 弹窗查看 initData 内容
+tg.showAlert(JSON.stringify(tg.initDataUnsafe, null, 2));
+
+// 控制台日志
+console.log('WebApp version:', tg.version);
+console.log('Platform:', tg.platform);
+console.log('Theme:', tg.colorScheme);
+```
+
+### Webhook 测试
+使用 ngrok 本地测试:
+```bash
+ngrok http 5000
+# 将生成的 https URL 设置为 webhook
+```
+
+## 社区资源
+
+- **Telegram 开发者群组**: @BotDevelopers
+- **Telegram API 讨论**: @TelegramBots
+- **Mini Apps 讨论**: @WebAppChat
+
+## 更新日志
+
+**最新功能:**
+- Paid Media (付费媒体)
+- Checklist Tasks (检查列表任务)
+- Gift Conversion (礼物转换)
+- Business Features (商业功能)
+- Poll 选项增加到 12 个
+- Story 发布和编辑
+
+---
+
+## 完整实现模板 (新增)
+
+### Telegram Bot 按钮和键盘实现指南
+**文件:** `Telegram_Bot_按钮和键盘实现模板.md`
+**行数:** 404 行
+**大小:** 12 KB
+**语言:** 中文
+
+精简实用的 Telegram Bot 交互式功能实现指南:
+
+**核心内容:**
+- 三种按钮类型详解(Inline/Reply/Command Menu)
+- python-telegram-bot 和 Telethon 双实现对比
+- 完整的代码示例(即拿即用)
+- 项目结构和模块化设计
+- Handler 优先级和事件处理
+- 生产环境部署方案
+- 安全和错误处理最佳实践
+
+**特色:**
+- 核心代码精简,去除冗余示例
+- 聚焦常用场景和实用技巧
+- 完整的快速参考表
+
+---
+
+### 动态视图对齐 - 数据展示指南
+**文件:** `动态视图对齐实现文档.md`
+**行数:** 407 行
+**大小:** 12 KB
+**语言:** 中文
+
+专业的等宽字体数据对齐和格式化方案:
+
+**核心功能:**
+- 智能动态视图对齐算法(三步法)
+- 自动计算列宽,无需硬编码
+- 智能对齐规则(文本左,数字右)
+- 完整的格式化系统:
+ - 交易量智能缩写(B/M/K)
+ - 价格智能精度(自适应小数位)
+ - 涨跌幅格式化(+/- 符号)
+ - 资金流向智能显示
+
+**应用场景:**
+- 排行榜、数据表格、实时行情
+- 任何需要专业数据展示的 Telegram Bot
+
+**技术特点:**
+- O(n×m) 线性复杂度,高效实用
+- 1000 行数据处理仅需 5-10ms
+- 支持中文字符宽度扩展
+
+**视觉效果示例:**
+```
+1. BTC  $1.23B $45,000 +5.23%
+2. ETH $890.5M  $2,500 +3.12%
+3. SOL $567.8M    $101 +8.45%
+```
+
+---
+
+**这些模板提供了从基础到生产级别的完整 Telegram Bot 开发解决方案!**
diff --git a/skills/telegram-dev/references/动态视图对齐实现文档.md b/skills/telegram-dev/references/动态视图对齐实现文档.md
new file mode 100644
index 0000000..2cdeda4
--- /dev/null
+++ b/skills/telegram-dev/references/动态视图对齐实现文档.md
@@ -0,0 +1,407 @@
+# 📊 动态视图对齐 - Telegram 数据展示指南
+
+> 专业的等宽字体数据对齐和格式化方案
+
+---
+
+## 📑 目录
+
+- [核心原理](#核心原理)
+- [实现代码](#实现代码)
+- [格式化系统](#格式化系统)
+- [应用示例](#应用示例)
+- [最佳实践](#最佳实践)
+
+---
+
+## 核心原理
+
+### 问题场景
+
+在 Telegram Bot 中展示排行榜、数据表格时,需要在等宽字体环境(代码块)中实现完美对齐:
+
+**❌ 未对齐:**
+```
+1. BTC $1.23B $45000 +5.23%
+10. DOGE $123.4M $0.0789 -1.45%
+```
+
+**✅ 动态对齐:**
+```
+1.  BTC   $1.23B $45,000 +5.23%
+10. DOGE $123.4M $0.0789 -1.45%
+```
+
+### 三步对齐算法
+
+```
+步骤 1: 扫描数据,计算每列最大宽度
+步骤 2: 根据列类型应用对齐规则(文本左对齐,数字右对齐)
+步骤 3: 拼接成最终文本
+```
+
+### 对齐规则
+
+| 列索引 | 数据类型 | 对齐方式 | 示例 |
+|--------|----------|----------|------|
+| 列 0 | 序号 | 左对齐 | `1. `, `10. ` |
+| 列 1 | 符号 | 左对齐 | `BTC `, `DOGE ` |
+| 列 2+ | 数值 | 右对齐 | ` $1.23B`, `$123.4M` |
+
+---
+
+## 实现代码
+
+### 核心函数
+
+```python
+def dynamic_align_format(data_rows):
+ """
+ 动态视图对齐格式化
+
+ 参数:
+ data_rows: 二维列表 [["1.", "BTC", "$1.23B", ...], ...]
+
+ 返回:
+ 对齐后的文本字符串
+ """
+ if not data_rows:
+ return "暂无数据"
+
+ # ========== 步骤 1: 计算每列最大宽度 ==========
+ max_widths = []
+ for row in data_rows:
+ for i, cell in enumerate(row):
+ # 动态扩展列表
+ if i >= len(max_widths):
+ max_widths.append(0)
+ # 更新最大宽度
+ max_widths[i] = max(max_widths[i], len(str(cell)))
+
+ # ========== 步骤 2: 格式化每一行 ==========
+ formatted_rows = []
+ for row in data_rows:
+ formatted_cells = []
+ for i, cell in enumerate(row):
+ cell_str = str(cell)
+
+ if i == 0 or i == 1:
+ # 序号列和符号列 - 左对齐
+ formatted_cells.append(cell_str.ljust(max_widths[i]))
+ else:
+ # 数值列 - 右对齐
+ formatted_cells.append(cell_str.rjust(max_widths[i]))
+
+ # 用空格连接所有单元格
+ formatted_line = ' '.join(formatted_cells)
+ formatted_rows.append(formatted_line)
+
+ # ========== 步骤 3: 拼接成最终文本 ==========
+ return '\n'.join(formatted_rows)
+```
+
+### 使用示例
+
+```python
+# 准备数据
+data_rows = [
+ ["1.", "BTC", "$1.23B", "$45,000", "+5.23%"],
+ ["2.", "ETH", "$890.5M", "$2,500", "+3.12%"],
+ ["10.", "DOGE", "$123.4M", "$0.0789", "-1.45%"]
+]
+
+# 调用对齐函数
+aligned_text = dynamic_align_format(data_rows)
+
+# 输出到 Telegram
+text = f"""📊 排行榜
+```
+{aligned_text}
+```
+💡 说明文字"""
+```
+
+---
+
+## 格式化系统
+
+### 1. 交易量智能缩写
+
+```python
+def format_volume(volume: float) -> str:
+ """智能格式化交易量"""
+ if volume >= 1e9:
+ return f"${volume/1e9:.2f}B" # 十亿 → $1.23B
+    elif volume >= 1e6:
+        return f"${volume/1e6:.2f}M"   # 百万 → $890.50M
+    elif volume >= 1e3:
+        return f"${volume/1e3:.2f}K"   # 千 → $123.40K
+ else:
+ return f"${volume:.2f}" # 小数 → $45.67
+```
+
+**示例:**
+```python
+format_volume(1234567890)  # → "$1.23B"
+format_volume(890500000)   # → "$890.50M"
+format_volume(123400)      # → "$123.40K"
+```
+
+### 2. 价格智能精度
+
+```python
+def format_price(price: float) -> str:
+ """智能格式化价格 - 根据大小自动调整小数位"""
+ if price >= 1000:
+ return f"${price:,.0f}" # 千元以上 → $45,000
+ elif price >= 1:
+ return f"${price:.3f}" # 1-1000 → $2.500
+ elif price >= 0.01:
+ return f"${price:.4f}" # 0.01-1 → $0.0789
+ else:
+ return f"${price:.6f}" # <0.01 → $0.000123
+```
+
+### 3. 涨跌幅格式化
+
+```python
+def format_change(change_percent: float) -> str:
+ """格式化涨跌幅 - 正数添加+号"""
+ if change_percent >= 0:
+ return f"+{change_percent:.2f}%"
+ else:
+ return f"{change_percent:.2f}%"
+```
+
+**示例:**
+```python
+format_change(5.234) # → "+5.23%"
+format_change(-1.456) # → "-1.46%"
+format_change(0) # → "+0.00%"
+```
+
+### 4. 资金流向智能显示
+
+```python
+def format_flow(net_flow: float) -> str:
+ """格式化资金净流向"""
+ sign = "+" if net_flow >= 0 else ""
+ abs_flow = abs(net_flow)
+
+ if abs_flow >= 1e9:
+ return f"{sign}{net_flow/1e9:.2f}B"
+ elif abs_flow >= 1e6:
+ return f"{sign}{net_flow/1e6:.2f}M"
+ elif abs_flow >= 1e3:
+ return f"{sign}{net_flow/1e3:.2f}K"
+ else:
+ return f"{sign}{net_flow:.0f}"
+```
+
+---
+
+## 应用示例
+
+### 完整排行榜实现
+
+```python
+from datetime import datetime
+
+def get_volume_ranking(data, limit=10):
+    """获取交易量排行榜"""
+
+ # 1. 数据处理和排序
+ sorted_data = sorted(data, key=lambda x: x['volume'], reverse=True)[:limit]
+
+ # 2. 准备数据行
+ data_rows = []
+ for i, item in enumerate(sorted_data, 1):
+ symbol = item['symbol']
+ volume = item['volume']
+ price = item['price']
+ change = item['change_percent']
+
+ # 格式化各列
+ volume_str = format_volume(volume)
+ price_str = format_price(price)
+ change_str = format_change(change)
+
+ # 添加到数据行
+ data_rows.append([
+ f"{i}.", # 序号
+ symbol, # 币种
+ volume_str, # 交易量
+ price_str, # 价格
+ change_str # 涨跌幅
+ ])
+
+ # 3. 动态对齐格式化
+ aligned_data = dynamic_align_format(data_rows)
+
+ # 4. 构建最终消息
+ text = f"""🎪 热币排行 - 交易量榜 🎪
+⏰ 更新 {datetime.now().strftime('%Y-%m-%d %H:%M')}
+📊 排序 24小时交易量(USDT) / 降序
+排名/币种/24h交易量/价格/24h涨跌
+```
+{aligned_data}
+```
+💡 交易量反映市场活跃度和流动性"""
+
+ return text
+```
+
+### 输出效果
+
+```
+🎪 热币排行 - 交易量榜 🎪
+⏰ 更新 2025-10-29 14:30
+📊 排序 24小时交易量(USDT) / 降序
+排名/币种/24h交易量/价格/24h涨跌
+
+1. BTC  $1.23B $45,000 +5.23%
+2. ETH $890.5M  $2,500 +3.12%
+3. SOL $567.8M    $101 +8.45%
+4. BNB $432.1M    $315 +2.67%
+5. XRP $345.6M  $0.589 -1.23%
+
+💡 交易量反映市场活跃度和流动性
+```
+
+---
+
+## 最佳实践
+
+### 1. 数据准备规范
+
+```python
+# ✅ 推荐:使用列表嵌套结构
+data_rows = [
+ ["1.", "BTC", "$1.23B", "$45,000", "+5.23%"],
+ ["2.", "ETH", "$890.5M", "$2,500", "+3.12%"]
+]
+
+# ❌ 不推荐:使用字典(需要额外转换)
+data_rows = [
+ {"rank": 1, "symbol": "BTC", ...},
+]
+```
+
+### 2. 格式化顺序
+
+```python
+# ✅ 推荐:先格式化,再对齐
+for i, item in enumerate(data, 1):
+ volume_str = format_volume(item['volume']) # 格式化
+ price_str = format_price(item['price']) # 格式化
+ change_str = format_change(item['change']) # 格式化
+
+ data_rows.append([f"{i}.", symbol, volume_str, price_str, change_str])
+
+aligned_data = dynamic_align_format(data_rows) # 对齐
+```
+
+### 3. Telegram 消息嵌入
+
+```python
+# ✅ 推荐:使用代码块包裹对齐数据
+text = f"""📊 排行榜标题
+⏰ 更新时间 {time}
+```
+{aligned_data}
+```
+💡 说明文字"""
+
+# ❌ 不推荐:直接输出(Telegram会自动换行,破坏对齐)
+text = f"""📊 排行榜标题
+{aligned_data}
+💡 说明文字"""
+```
+
+### 4. 空数据处理
+
+```python
+# ✅ 推荐:在函数开头检查
+def dynamic_align_format(data_rows):
+ if not data_rows:
+ return "暂无数据"
+ # ... 正常处理逻辑 ...
+```
+
+### 5. 性能优化
+
+```python
+# ✅ 推荐:限制数据量
+sorted_data = sorted(data, key=lambda x: x['volume'], reverse=True)[:limit]
+aligned_data = dynamic_align_format(data_rows)
+
+# ❌ 不推荐:处理全量后截取(浪费资源)
+aligned_data = dynamic_align_format(all_data_rows)
+final_data = aligned_data.split('\n')[:limit]
+```
+
+### 6. 中文字符支持(可选)
+
+```python
+def get_display_width(text):
+ """计算文本显示宽度(中文=2,英文=1)"""
+ width = 0
+ for char in text:
+ if ord(char) > 127: # 非ASCII字符
+ width += 2
+ else:
+ width += 1
+ return width
+
+# 在 dynamic_align_format 中使用
+max_widths[i] = max(max_widths[i], get_display_width(str(cell)))
+```
+
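注意上面的 `get_display_width` 只解决“量宽”:`ljust`/`rjust` 仍按字符数补空格,中英混排时两者会对不齐,还需要一个按显示宽度补空格的配套函数。下面是一个示意实现(为保持自包含,重复定义了宽度函数):

```python
def get_display_width(text: str) -> int:
    """中文等非 ASCII 字符按 2 个宽度计算"""
    return sum(2 if ord(ch) > 127 else 1 for ch in text)

def pad_display(text: str, width: int, align: str = "left") -> str:
    """按显示宽度(而非字符数)补空格,替代 ljust/rjust"""
    gap = max(0, width - get_display_width(text))
    return text + " " * gap if align == "left" else " " * gap + text
```

在 `dynamic_align_format` 中用 `pad_display(cell_str, max_widths[i])` 替换 `ljust`/`rjust` 调用即可。
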
+---
+
+## 设计优势
+
+### 与硬编码方式对比
+
+| 特性 | 传统硬编码 | 动态对齐 |
+|------|-----------|---------|
+| 列宽适配 | 手动指定 | 自动计算 |
+| 维护成本 | 高(需多处修改) | 低(一次编写) |
+| 对齐精度 | 易出偏差 | 字符级精确 |
+| 扩展性 | 需重构 | 自动支持任意列 |
+| 性能 | O(n) | O(n×m) |
+
+### 技术亮点
+
+- **自适应宽度**: 无论数据如何变化,始终完美对齐
+- **智能对齐规则**: 符合人类阅读习惯(文本左,数字右)
+- **等宽字体完美支持**: 空格填充确保对齐效果
+- **高复用性**: 一个函数适用所有排行榜场景
+
+---
+
+## 快速参考
+
+### 函数签名
+
+```python
+dynamic_align_format(data_rows: list[list]) -> str
+format_volume(volume: float) -> str
+format_price(price: float) -> str
+format_change(change_percent: float) -> str
+format_flow(net_flow: float) -> str
+```
+
+### 时间复杂度
+
+- 宽度计算: O(n × m)
+- 格式化输出: O(n × m)
+- 总复杂度: O(n × m) - 线性时间,高效实用
+
+### 性能基准
+
+- 处理 100 行 × 5 列: ~1ms
+- 处理 1000 行 × 5 列: ~5-10ms
+- 内存占用: 最小
+
+---
+
+**这份指南提供了 Telegram Bot 专业数据展示的完整解决方案!**
diff --git a/skills/timescaledb/SKILL.md b/skills/timescaledb/SKILL.md
new file mode 100644
index 0000000..e6880aa
--- /dev/null
+++ b/skills/timescaledb/SKILL.md
@@ -0,0 +1,108 @@
+---
+name: timescaledb
+description: TimescaleDB - PostgreSQL extension for high-performance time-series and event data analytics, hypertables, continuous aggregates, compression, and real-time analytics
+---
+
+# Timescaledb Skill
+
+Comprehensive assistance with timescaledb development, generated from official documentation.
+
+## When to Use This Skill
+
+This skill should be triggered when:
+- Working with timescaledb
+- Asking about timescaledb features or APIs
+- Implementing timescaledb solutions
+- Debugging timescaledb code
+- Learning timescaledb best practices
+
+## Quick Reference
+
+### Common Patterns
+
+*Quick reference patterns will be added as you use the skill.*
+
+### Example Code Patterns
+
+**Example 1** (bash):
+```bash
+rails new my_app -d=postgresql
+cd my_app
+```
+
+**Example 2** (ruby):
+```ruby
+gem 'timescaledb'
+```
+
+**Example 3** (shell):
+```shell
+kubectl create namespace timescale
+```
+
+**Example 4** (shell):
+```shell
+kubectl config set-context --current --namespace=timescale
+```
+
+**Example 5** (sql):
+```sql
+DROP EXTENSION timescaledb;
+```
+
+## Reference Files
+
+This skill includes comprehensive documentation in `references/`:
+
+- **api.md** - API documentation
+- **compression.md** - Compression documentation
+- **continuous_aggregates.md** - Continuous Aggregates documentation
+- **getting_started.md** - Getting Started documentation
+- **hyperfunctions.md** - Hyperfunctions documentation
+- **hypertables.md** - Hypertables documentation
+- **installation.md** - Installation documentation
+- **other.md** - Other documentation
+- **performance.md** - Performance documentation
+- **time_buckets.md** - Time Buckets documentation
+- **tutorials.md** - Tutorials documentation
+
+Use `view` to read specific reference files when detailed information is needed.
+
+## Working with This Skill
+
+### For Beginners
+Start with the getting_started or tutorials reference files for foundational concepts.
+
+### For Specific Features
+Use the appropriate category reference file (api, guides, etc.) for detailed information.
+
+### For Code Examples
+The quick reference section above contains common patterns extracted from the official docs.
+
+## Resources
+
+### references/
+Organized documentation extracted from official sources. These files contain:
+- Detailed explanations
+- Code examples with language annotations
+- Links to original documentation
+- Table of contents for quick navigation
+
+### scripts/
+Add helper scripts here for common automation tasks.
+
+### assets/
+Add templates, boilerplate, or example projects here.
+
+## Notes
+
+- This skill was automatically generated from official documentation
+- Reference files preserve the structure and examples from source docs
+- Code examples include language detection for better syntax highlighting
+- Quick reference patterns are extracted from common usage examples in the docs
+
+## Updating
+
+To refresh this skill with updated documentation:
+1. Re-run the scraper with the same configuration
+2. The skill will be rebuilt with the latest information
diff --git a/skills/timescaledb/references/api.md b/skills/timescaledb/references/api.md
new file mode 100644
index 0000000..824f5ed
--- /dev/null
+++ b/skills/timescaledb/references/api.md
@@ -0,0 +1,2195 @@
+# Timescaledb - Api
+
+**Pages:** 100
+
+---
+
+## UUIDv7 functions
+
+**URL:** llms-txt#uuidv7-functions
+
+**Contents:**
+- Examples
+- Functions
+
+UUIDv7 is a time-ordered UUID that includes a Unix timestamp (with millisecond precision) in its first 48 bits. Like
+other UUIDs, it uses 6 bits for version and variant info, and the remaining 74 bits are random.
+
+
+
+UUIDv7 is ideal anywhere you create lots of records over time, not only observability. Advantages are:
+
+- **No extra column required to partition by time with sortability**: you can sort UUIDv7 instances by their value. This
+ is useful for ordering records by creation time without the need for a separate timestamp column.
+- **Indexing performance**: UUIDv7s increase with time, so new rows append near the end of a B-tree instead of
+  at random positions throughout the index. This results in fewer page splits, less fragmentation, faster inserts, and efficient time-range scans.
+- **Easy keyset pagination**: `WHERE id > :cursor` and natural sharding.
+- **UUID**: safe across services, replicas, and unique across distributed systems.
+
+UUIDv7 also increases query speed by reducing the number of chunks scanned during queries. For example, in a database
+with 25 million rows, the following query runs in 25 seconds:
+
+Using UUIDv7 excludes chunks at startup and reduces the query time to 550ms:
+
+You can use UUIDv7s for events, orders, messages, uploads, runs, jobs, spans, and more.
+
+- **High-rate event logs for observability and metrics**:
+
+  UUIDv7 gives you globally unique IDs (for traceability) and time windows ("last hour") without the need for a
+  separate `created_at` column. UUIDv7s create less index churn because inserts land at the end of the index, and you can
+  filter by time using UUIDv7 values.
+
+  - Last hour
+  - Keyset pagination
+
+- **Workflow / durable execution runs**:
+
+Each run needs a stable ID for joins and retries, and you often ask “what started since X?”. UUIDs help by serving
+ both as the primary key and a time cursor across services. For example:
+
+- **Orders / activity feeds / messages (SaaS apps)**:
+
+Human-readable timestamps are not mandatory in a table. However, you still need time-ordered pages and day/week ranges.
+ UUIDv7 enables clean date windows and cursor pagination with just the ID. For example:
+
+- [generate_uuidv7()][generate_uuidv7]: generate a version 7 UUID based on current time
+- [to_uuidv7()][to_uuidv7]: create a version 7 UUID from a PostgreSQL timestamp
+- [to_uuidv7_boundary()][to_uuidv7_boundary]: create a version 7 "boundary" UUID from a PostgreSQL timestamp
+- [uuid_timestamp()][uuid_timestamp]: extract a PostgreSQL timestamp from a version 7 UUID
+- [uuid_timestamp_micros()][uuid_timestamp_micros]: extract a PostgreSQL timestamp with microsecond precision from a version 7 UUID
+- [uuid_version()][uuid_version]: extract the version of a UUID
+
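The 48-bit millisecond timestamp described above can also be recovered client-side. A minimal Python sketch, mirroring what the SQL `uuid_timestamp()` function does (the helper name here is an assumption, not part of any library):

```python
import uuid
from datetime import datetime, timezone

def uuid7_timestamp(u: uuid.UUID) -> datetime:
    """Extract the Unix-millisecond timestamp stored in the first 48 bits of a UUIDv7."""
    ms = u.int >> 80  # 128 total bits - 48 timestamp bits = shift by 80
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
```
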
+===== PAGE: https://docs.tigerdata.com/api/approximate_row_count/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+WITH ref AS (SELECT now() AS t0)
+SELECT count(*) AS cnt_ts_filter
+FROM events e, ref
+WHERE uuid_timestamp(e.event_id) >= ref.t0 - INTERVAL '2 days';
+```
+
+Example 2 (sql):
+```sql
+WITH ref AS (SELECT now() AS t0)
+SELECT count(*) AS cnt_boundary_filter
+FROM events e, ref
+WHERE e.event_id >= to_uuidv7_boundary(ref.t0 - INTERVAL '2 days')
+```
+
+Example 3 (sql):
+```sql
+SELECT count(*) FROM logs WHERE id >= to_uuidv7_boundary(now() - interval '1 hour');
+```
+
+Example 4 (sql):
+```sql
+SELECT * FROM logs WHERE id > to_uuidv7('$last_seen'::timestamptz, true) ORDER BY id LIMIT 1000;
+```
+
+---
+
+## lttb()
+
+**URL:** llms-txt#lttb()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/saturating_add/ =====
+
+---
+
+## state_agg()
+
+**URL:** llms-txt#state_agg()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/state_agg/state_timeline/ =====
+
+---
+
+## compact_state_agg()
+
+**URL:** llms-txt#compact_state_agg()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/compact_state_agg/into_values/ =====
+
+---
+
+## vwap()
+
+**URL:** llms-txt#vwap()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/rollup/ =====
+
+---
+
+## interpolated_state_timeline()
+
+**URL:** llms-txt#interpolated_state_timeline()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/state_agg/interpolated_duration_in/ =====
+
+---
+
+## close()
+
+**URL:** llms-txt#close()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/open_time/ =====
+
+---
+
+## interpolated_downtime()
+
+**URL:** llms-txt#interpolated_downtime()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/min_n/min_n/ =====
+
+---
+
+## Frequency analysis
+
+**URL:** llms-txt#frequency-analysis
+
+This section includes frequency aggregate APIs, which find the most common elements out of a set of
+vastly more varied values.
+
+For these hyperfunctions, you need to install the [TimescaleDB Toolkit][install-toolkit] Postgres extension.
+
+
+
+===== PAGE: https://docs.tigerdata.com/api/informational-views/ =====
+
+---
+
+## stderror()
+
+**URL:** llms-txt#stderror()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/hyperloglog/approx_count_distinct/ =====
+
+---
+
+## tdigest()
+
+**URL:** llms-txt#tdigest()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/tdigest/mean/ =====
+
+---
+
+## volume()
+
+**URL:** llms-txt#volume()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/candlestick_agg/ =====
+
+---
+
+## high_time()
+
+**URL:** llms-txt#high_time()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/count_min_sketch/approx_count/ =====
+
+---
+
+## open()
+
+**URL:** llms-txt#open()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/low/ =====
+
+---
+
+## interpolated_average()
+
+**URL:** llms-txt#interpolated_average()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/time_weight/average/ =====
+
+---
+
+## slope()
+
+**URL:** llms-txt#slope()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/num_elements/ =====
+
+---
+
+## irate_right()
+
+**URL:** llms-txt#irate_right()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/last_val/ =====
+
+---
+
+## trim_to()
+
+**URL:** llms-txt#trim_to()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/intro/ =====
+
+Given a series of timestamped heartbeats and a liveness interval, determine the
+overall liveness of a system. This aggregate can be used to report total uptime
+or downtime as well as report the time ranges where the system was live or dead.
+
+It's also possible to combine multiple heartbeat aggregates to determine the
+overall health of a service. For example, the heartbeat aggregates from a
+primary and standby server could be combined to see if there was ever a window
+where both machines were down at the same time.
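+
+A hedged sketch of that combination (the `server_pings` table and column names are assumptions): rolling up the per-server aggregates yields the combined liveness, so the dead ranges of the rolled-up aggregate are exactly the windows where every server was down at once.
+
+```sql
+SELECT dead_ranges(rollup(hb))
+FROM (
+    -- One heartbeat aggregate per server, over the same one-day window
+    SELECT heartbeat_agg(ping_time, '2024-01-01', INTERVAL '1 day', INTERVAL '30s') AS hb
+    FROM server_pings
+    GROUP BY server_id
+) AS per_server;
+```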
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/dead_ranges/ =====
+
+---
+
+## irate_left()
+
+**URL:** llms-txt#irate_left()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/num_changes/ =====
+
+---
+
+## interpolated_delta()
+
+**URL:** llms-txt#interpolated_delta()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/counter_zero_time/ =====
+
+---
+
+## counter_zero_time()
+
+**URL:** llms-txt#counter_zero_time()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/irate_left/ =====
+
+---
+
+## Tiger Cloud REST API reference
+
+**URL:** llms-txt#tiger-cloud-rest-api-reference
+
+**Contents:**
+- Overview
+- Authentication
+ - Basic Authentication
+ - Example
+- Service Management
+ - List All Services
+ - Create a Service
+ - Get a Service
+ - Delete a Service
+ - Resize a Service
+
+A comprehensive RESTful API for managing Tiger Cloud resources including VPCs, services, and read replicas.
+
+**API Version:** 1.0.0
+**Base URL:** `https://console.cloud.timescale.com/public/api/v1`
+
+The Tiger REST API uses HTTP Basic Authentication. Include your access key and secret key in the Authorization header.
+
+### Basic Authentication
+
+## Service Management
+
+You use this endpoint to create a Tiger Cloud service with one or more of the following addons:
+
+- `time-series`: a Tiger Cloud service optimized for real-time analytics. For time-stamped data like events,
+ prices, metrics, sensor readings, or any information that changes over time.
+- `ai`: a Tiger Cloud service instance with vector extensions.
+
+To have multiple addons when you create a new service, set `"addons": ["time-series", "ai"]`. To create a
+vanilla Postgres instance, set `addons` to an empty list `[]`.
+
+### List All Services
+
+Retrieve all services within a project.
+
+**Response:** `200 OK`
+
+Create a new Tiger Cloud service. This is an asynchronous operation.
+
+**Response:** `202 Accepted`
+
+**Service Types:**
+- `TIMESCALEDB`: a Tiger Cloud service instance optimized for real-time analytics. For time-stamped data like events,
+  prices, metrics, sensor readings, or any information that changes over time
+- `POSTGRES`: a vanilla Postgres instance
+- `VECTOR`: a Tiger Cloud service instance with vector extensions
+
+Retrieve details of a specific service.
+
+**Response:** `200 OK`
+
+**Service Status:**
+- `QUEUED`: Service creation is queued
+- `DELETING`: Service is being deleted
+- `CONFIGURING`: Service is being configured
+- `READY`: Service is ready for use
+- `DELETED`: Service has been deleted
+- `UNSTABLE`: Service is in an unstable state
+- `PAUSING`: Service is being paused
+- `PAUSED`: Service is paused
+- `RESUMING`: Service is being resumed
+- `UPGRADING`: Service is being upgraded
+- `OPTIMIZING`: Service is being optimized
+
+Delete a specific service. This is an asynchronous operation.
+
+**Response:** `202 Accepted`
+
+Change CPU and memory allocation for a service.
+
+**Response:** `202 Accepted`
+
+### Update Service Password
+
+Set a new master password for the service.
+
+**Response:** `204 No Content`
+
+### Set Service Environment
+
+Set the environment type for the service.
+
+**Environment Values:**
+- `PROD`: Production environment
+- `DEV`: Development environment
+
+**Response:** `200 OK`
+
+### Configure High Availability
+
+Change the HA configuration for a service. This is an asynchronous operation.
+
+**Response:** `202 Accepted`
+
+### Connection Pooler Management
+
+#### Enable Connection Pooler
+
+Activate the connection pooler for a service.
+
+**Response:** `200 OK`
+
+#### Disable Connection Pooler
+
+Deactivate the connection pooler for a service.
+
+**Response:** `200 OK`
+
+Create a new, independent service by taking a snapshot of an existing one.
+
+**Response:** `202 Accepted`
+
+Manage read replicas for improved read performance.
+
+### List Read Replica Sets
+
+Retrieve all read replica sets associated with a primary service.
+
+**Response:** `200 OK`
+
+**Replica Set Status:**
+- `creating`: Replica set is being created
+- `active`: Replica set is active and ready
+- `resizing`: Replica set is being resized
+- `deleting`: Replica set is being deleted
+- `error`: Replica set encountered an error
+
+### Create a Read Replica Set
+
+Create a new read replica set. This is an asynchronous operation.
+
+**Response:** `202 Accepted`
+
+### Delete a Read Replica Set
+
+Delete a specific read replica set. This is an asynchronous operation.
+
+**Response:** `202 Accepted`
+
+### Resize a Read Replica Set
+
+Change resource allocation for a read replica set. This is an asynchronous operation.
+
+**Response:** `202 Accepted`
+
+### Read Replica Set Connection Pooler
+
+#### Enable Replica Set Pooler
+
+Activate the connection pooler for a read replica set.
+
+**Response:** `200 OK`
+
+#### Disable Replica Set Pooler
+
+Deactivate the connection pooler for a read replica set.
+
+**Response:** `200 OK`
+
+### Set Replica Set Environment
+
+Set the environment type for a read replica set.
+
+**Response:** `200 OK`
+
+Virtual Private Clouds (VPCs) provide network isolation for your TigerData services.
+
+List all Virtual Private Clouds in a project.
+
+**Response:** `200 OK`
+
+**Response:** `201 Created`
+
+Retrieve details of a specific VPC.
+
+**Response:** `200 OK`
+
+Update the name of a specific VPC.
+
+**Response:** `200 OK`
+
+Delete a specific VPC.
+
+**Response:** `204 No Content`
+
+Manage peering connections between VPCs across different accounts and regions.
+
+### List VPC Peerings
+
+Retrieve all VPC peering connections for a given VPC.
+
+**Response:** `200 OK`
+
+### Create VPC Peering
+
+Create a new VPC peering connection.
+
+**Response:** `201 Created`
+
+Retrieve details of a specific VPC peering connection.
+
+### Delete VPC Peering
+
+Delete a specific VPC peering connection.
+
+**Response:** `204 No Content`
+
+## Service VPC Operations
+
+### Attach Service to VPC
+
+Associate a service with a VPC.
+
+**Response:** `202 Accepted`
+
+### Detach Service from VPC
+
+Disassociate a service from its VPC.
+
+**Response:** `202 Accepted`
+
+### Read Replica Set Object
+
+Tiger Cloud REST API uses standard HTTP status codes and returns error details in JSON format.
+
+### Error Response Format
+
+### Common Error Codes
+- `400 Bad Request`: Invalid request parameters or malformed JSON
+- `401 Unauthorized`: Missing or invalid authentication credentials
+- `403 Forbidden`: Insufficient permissions for the requested operation
+- `404 Not Found`: Requested resource does not exist
+- `409 Conflict`: Request conflicts with current resource state
+- `500 Internal Server Error`: Unexpected server error
+
+### Example Error Response
+
+===== PAGE: https://docs.tigerdata.com/api/glossary/ =====
+
+**Examples:**
+
+Example 1 (http):
+```http
+Authorization: Basic
+```
+
+Example 2 (bash):
+```bash
+curl -X GET "https://console.cloud.timescale.com/public/api/v1/projects/{project_id}/services" \
+ -H "Authorization: Basic $(echo -n 'your_access_key:your_secret_key' | base64)"
+```
+
+Example 3 (http):
+```http
+GET /projects/{project_id}/services
+```
+
+Example 4 (json):
+```json
+[
+ {
+ "service_id": "p7zm9wqqii",
+ "project_id": "jz22xtzemv",
+ "name": "my-production-db",
+ "region_code": "eu-central-1",
+ "service_type": "TIMESCALEDB",
+ "status": "READY",
+ "created": "2024-01-15T10:30:00Z",
+ "paused": false,
+ "resources": [
+ {
+ "id": "resource-1",
+ "spec": {
+ "cpu_millis": 1000,
+ "memory_gbs": 4,
+ "volume_type": "gp2"
+ }
+ }
+ ],
+ "endpoint": {
+ "host": "my-service.com",
+ "port": 5432
+ }
+ }
+]
+```
+
+---
+
+## approx_count_distinct()
+
+**URL:** llms-txt#approx_count_distinct()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/max_n/max_n/ =====
+
+---
+
+## variance()
+
+**URL:** llms-txt#variance()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/gauge_agg/delta/ =====
+
+---
+
+## low()
+
+**URL:** llms-txt#low()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/candlestick/ =====
+
+---
+
+## Administrative functions
+
+**URL:** llms-txt#administrative-functions
+
+**Contents:**
+- Dump TimescaleDB meta data
+- get_telemetry_report()
+ - Sample usage
+- timescaledb_post_restore()
+ - Sample usage
+- timescaledb_pre_restore()
+ - Sample usage
+
+These administrative APIs help you prepare a database before and after a restore event. They also help you keep track of your TimescaleDB setup data.
+
+## Dump TimescaleDB meta data
+
+To help when asking for support and reporting bugs, TimescaleDB includes an SQL dump script. It outputs metadata from the internal TimescaleDB tables, along with version information.
+
+This script is available in the source distribution in `scripts/`. To use it, run:
+
+Inspect `dumpfile.txt` before sending it together with a bug report or support question.
+
+## get_telemetry_report()
+
+Returns the background [telemetry][telemetry] string sent to Tiger Data.
+
+If telemetry is turned off, it returns the string that would be sent if telemetry were enabled.
+
+View the telemetry report:
+
+## timescaledb_post_restore()
+
+Perform the required operations after you have finished restoring the database using `pg_restore`. Specifically, this resets the `timescaledb.restoring` GUC and restarts any background workers.
+
+For more information, see [Migrate using pg_dump and pg_restore].
+
+Prepare the database for normal use after a restore:
+
+## timescaledb_pre_restore()
+
+Perform the required operations so that you can restore the database using `pg_restore`. Specifically, this sets the `timescaledb.restoring` GUC to `on` and stops any background workers which could have been performing tasks.
+
+The background workers are stopped until the [timescaledb_post_restore()](#timescaledb_post_restore) function is run, after the restore operation is complete.
+
+For more information, see [Migrate using pg_dump and pg_restore].
+
+After using `timescaledb_pre_restore()`, you need to run [`timescaledb_post_restore()`](#timescaledb_post_restore) before you can use the database normally.
+
+Prepare to restore the database:
+
+===== PAGE: https://docs.tigerdata.com/api/api-tag-overview/ =====
+
+**Examples:**
+
+Example 1 (bash):
+```bash
+psql [your connect flags] -d your_timescale_db < dump_meta_data.sql > dumpfile.txt
+```
+
+Example 2 (sql):
+```sql
+SELECT get_telemetry_report();
+```
+
+Example 3 (sql):
+```sql
+SELECT timescaledb_post_restore();
+```
+
+Example 4 (sql):
+```sql
+SELECT timescaledb_pre_restore();
+```
+
+---
+
+## into_array()
+
+**URL:** llms-txt#into_array()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/max_n/into_values/ =====
+
+---
+
+## live_ranges()
+
+**URL:** llms-txt#live_ranges()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/interpolate/ =====
+
+---
+
+## num_resets()
+
+**URL:** llms-txt#num_resets()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/last_time/ =====
+
+---
+
+## uptime()
+
+**URL:** llms-txt#uptime()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/num_gaps/ =====
+
+---
+
+## API Reference
+
+**URL:** llms-txt#api-reference
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/time_delta/ =====
+
+---
+
+## saturating_mul()
+
+**URL:** llms-txt#saturating_mul()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/downsampling-intro/ =====
+
+Downsample your data to visualize trends while preserving fewer data points.
+Downsampling replaces a set of values with a much smaller set that is highly
+representative of the original data. This is particularly useful for graphing
+applications.
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/saturating_sub/ =====
+
+---
+
+## average()
+
+**URL:** llms-txt#average()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/time_weight/rollup/ =====
+
+---
+
+## downtime()
+
+**URL:** llms-txt#downtime()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/interpolated_uptime/ =====
+
+---
+
+## Create and manage jobs
+
+**URL:** llms-txt#create-and-manage-jobs
+
+**Contents:**
+- Prerequisites
+- Create a job
+- Test and debug a job
+- Alter and delete a job
+
+Jobs in TimescaleDB are custom functions or procedures that run on a schedule that you define. This page explains how to create, test, alter, and delete a job.
+
+To follow the procedure on this page you need to:
+
+* Create a [target Tiger Cloud service][create-service].
+
+This procedure also works for [self-hosted TimescaleDB][enable-timescaledb].
+
+To create a job, create a [function][postgres-createfunction] or [procedure][postgres-createprocedure] that you want your database to execute, then set it up to run on a schedule.
+
+1. **Define a function or procedure in the language of your choice**
+
+Wrap it in a `CREATE` statement:
+
+For example, to create a function that reindexes a table within your database:
+
+`job_id` and `config` are required arguments in the function signature. This returns `CREATE FUNCTION` to indicate that the function has successfully been created.
+
+1. **Call the function to validate**
+
+The result looks like this:
+
+1. **Register your job with [`add_job`][api-add_job]**
+
+Pass the name of your job, the schedule you want it to run on, and the content of your config. For the `config` value, if you don't need any special configuration parameters, set to `NULL`. For example, to run the `reindex_mytable` function every hour:
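+
+A hedged sketch of this call (`reindex_mytable` is the function created above; `1000` in later steps stands in for the returned `job_id`):
+
+```sql
+-- Run reindex_mytable every hour, with no special config
+SELECT add_job('reindex_mytable', INTERVAL '1 hour', config => NULL);
+```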
+
+The call returns a `job_id` and stores it along with `config` in the TimescaleDB catalog.
+
+The job runs on the schedule you set. You can also run it manually with [`run_job`][api-run_job] passing `job_id`. When the job runs, `job_id` and `config` are passed as arguments.
+
+1. **Validate the job**
+
+List all currently registered jobs with [`timescaledb_information.jobs`][api-timescaledb_information-jobs]:
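+
+A minimal sketch of the listing query:
+
+```sql
+SELECT job_id, proc_name, schedule_interval, config
+FROM timescaledb_information.jobs;
+```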
+
+The result looks like this:
+
+## Test and debug a job
+
+To debug a job, increase the log level and run the job manually with [`run_job`][api-run_job] in the foreground. Because `run_job` is a stored procedure and not a function, run it with [`CALL`][postgres-call] instead of `SELECT`.
+
+1. **Set the minimum log level to `DEBUG1`**
+
+Replace `1000` with your `job_id`:
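+
+A hedged sketch of both steps (`1000` is a placeholder `job_id`):
+
+```sql
+-- Raise the session log level so DEBUG1 messages are shown
+SET client_min_messages TO DEBUG1;
+-- run_job is a stored procedure, so invoke it with CALL
+CALL run_job(1000);
+```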
+
+## Alter and delete a job
+
+Alter an existing job with [`alter_job`][api-alter_job]. You can change both the config and the schedule on which the job runs.
+
+1. **Change a job's config**
+
+To replace the entire JSON config for a job, call `alter_job` with a new `config` object. For example, replace the JSON config for a job with ID `1000`:
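+
+A sketch of a full config replacement (the JSON keys here are illustrative assumptions, not a documented schema):
+
+```sql
+SELECT alter_job(1000, config => '{"hypertable": "mytable"}');
+```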
+
+1. **Turn off job scheduling**
+
+To turn off automatic scheduling of a job, call `alter_job` and set `scheduled` to `false`. You can still run the job manually with `run_job`. For example, turn off the scheduling for a job with ID `1000`:
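+
+A minimal sketch:
+
+```sql
+SELECT alter_job(1000, scheduled => false);
+```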
+
+1. **Re-enable automatic scheduling of a job**
+
+To re-enable automatic scheduling of a job, call `alter_job` and set `scheduled` to `true`. For example, re-enable scheduling for a job with ID `1000`:
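+
+A minimal sketch:
+
+```sql
+SELECT alter_job(1000, scheduled => true);
+```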
+
+1. **Delete a job with [`delete_job`][api-delete_job]**
+
+For example, to delete a job with ID `1000`:
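+
+A minimal sketch:
+
+```sql
+SELECT delete_job(1000);
+```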
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/hyperfunctions/function-pipelines/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+CREATE FUNCTION <function_name>(job_id INT DEFAULT NULL, config JSONB DEFAULT NULL)
+  RETURNS VOID
+  AS $$
+  DECLARE
+    <variable declarations>;
+  BEGIN
+    <function body>;
+  END;
+  $$ LANGUAGE <language>;
+```
+
+Example 2 (sql):
+```sql
+CREATE FUNCTION reindex_mytable(job_id INT DEFAULT NULL, config JSONB DEFAULT NULL)
+ RETURNS VOID
+ AS $$
+ BEGIN
+ REINDEX TABLE mytable;
+ END;
+ $$ LANGUAGE plpgsql;
+```
+
+Example 3 (sql):
+```sql
+SELECT reindex_mytable();
+```
+
+Example 4 (sql):
+```sql
+reindex_mytable
+ -----------------
+
+ (1 row)
+```
+
+---
+
+## topn()
+
+**URL:** llms-txt#topn()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/freq_agg/intro/ =====
+
+Get the most common elements of a set and their relative frequency. The
+estimation uses the [SpaceSaving][spacingsaving-algorithm] algorithm.
+
+This group of functions contains two aggregate functions, which let you set the
+cutoff for keeping track of a value in different ways. [`freq_agg`](#freq_agg)
+allows you to specify a minimum frequency, and [`mcv_agg`](#mcv_agg) allows
+you to specify the target number of values to keep.
+
+To estimate the absolute number of times a value appears, use [`count_min_sketch`][count_min_sketch].
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/freq_agg/min_frequency/ =====
+
+---
+
+## duration_in()
+
+**URL:** llms-txt#duration_in()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/compact_state_agg/intro/ =====
+
+Given a system or value that switches between discrete states, aggregate the
+amount of time spent in each state. For example, you can use the `compact_state_agg`
+functions to track how much time a system spends in `error`, `running`, or
+`starting` states.
+
+`compact_state_agg` is designed to work with a relatively small number of states. It
+might not perform well on datasets where states are mostly distinct between
+rows.
+
+If you need to track when each state is entered and exited, use the
+[`state_agg`][state_agg] functions. If you need to track the liveness of a
+system based on a heartbeat signal, consider using the
+[`heartbeat_agg`][heartbeat_agg] functions.
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/compact_state_agg/compact_state_agg/ =====
+
+---
+
+## high()
+
+**URL:** llms-txt#high()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/high_time/ =====
+
+---
+
+## corr()
+
+**URL:** llms-txt#corr()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/idelta_right/ =====
+
+---
+
+## last_time()
+
+**URL:** llms-txt#last_time()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/counter_agg/ =====
+
+---
+
+## gp_lttb()
+
+**URL:** llms-txt#gp_lttb()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/saturating-math-intro/ =====
+
+The saturating math hyperfunctions help you perform saturating math on integers.
+In saturating math, the final result is bounded. If the result of a normal
+mathematical operation exceeds either the minimum or maximum bound, the result
+of the corresponding saturating math operation is capped at the bound. For
+example, `2 + (-3) = -1`. But in a saturating math function with a lower bound
+of `0`, such as [`saturating_add_pos`](#saturating_add_pos), the result is `0`.
+
+You can use saturating math to make sure your results don't overflow the allowed
+range of integers, or to force a result to be greater than or equal to zero.
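+
+A hedged sketch of the bound described above (per the text, `2 + (-3)` is `-1` under ordinary addition, but `saturating_add_pos` caps the result at its lower bound of `0`):
+
+```sql
+SELECT saturating_add(2, -3);      -- within bounds, behaves like normal addition: -1
+SELECT saturating_add_pos(2, -3);  -- capped at the lower bound: 0
+```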
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/lttb/ =====
+
+---
+
+## intercept()
+
+**URL:** llms-txt#intercept()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/extrapolated_rate/ =====
+
+---
+
+## min_n()
+
+**URL:** llms-txt#min_n()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/min_n/intro/ =====
+
+Get the N smallest values from a column.
+
+The `min_n()` functions give the same results as the regular SQL query `SELECT
+... ORDER BY ... LIMIT n`. But unlike the SQL query, they can be composed and
+combined like other aggregate hyperfunctions.
+
+To get the N largest values, use [`max_n()`][max_n]. To get the N smallest
+values with accompanying data, use [`min_n_by()`][min_n_by].
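+
+A minimal sketch (the `readings` table and `temperature` column are assumptions):
+
+```sql
+-- Five smallest temperatures, unpacked into an array
+SELECT into_array(min_n(temperature, 5)) FROM readings;
+```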
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/min_n/into_array/ =====
+
+---
+
+## state_timeline()
+
+**URL:** llms-txt#state_timeline()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/state_agg/interpolated_state_timeline/ =====
+
+---
+
+## mcv_agg()
+
+**URL:** llms-txt#mcv_agg()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/compact_state_agg/interpolated_duration_in/ =====
+
+---
+
+## into_values()
+
+**URL:** llms-txt#into_values()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/max_n/rollup/ =====
+
+---
+
+## heartbeat_agg()
+
+**URL:** llms-txt#heartbeat_agg()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/rollup/ =====
+
+---
+
+## saturating_add_pos()
+
+**URL:** llms-txt#saturating_add_pos()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/saturating_multiply/ =====
+
+---
+
+## rate()
+
+**URL:** llms-txt#rate()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/with_bounds/ =====
+
+---
+
+## state_at()
+
+**URL:** llms-txt#state_at()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/state_agg/interpolated_state_periods/ =====
+
+---
+
+## close_time()
+
+**URL:** llms-txt#close_time()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/close/ =====
+
+---
+
+## saturating_add()
+
+**URL:** llms-txt#saturating_add()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/asap_smooth/ =====
+
+---
+
+## freq_agg()
+
+**URL:** llms-txt#freq_agg()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/freq_agg/max_frequency/ =====
+
+---
+
+## num_live_ranges()
+
+**URL:** llms-txt#num_live_ranges()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/interpolated_downtime/ =====
+
+---
+
+## candlestick()
+
+**URL:** llms-txt#candlestick()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/volume/ =====
+
+---
+
+## first_time()
+
+**URL:** llms-txt#first_time()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/intro/ =====
+
+Analyze data whose values are designed to monotonically increase, and where any
+decreases are treated as resets. The `counter_agg` functions simplify this task,
+which can be difficult to do in pure SQL.
+
+If it's possible for your readings to decrease as well as increase, use [`gauge_agg`][gauge_agg]
+instead.
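+
+A minimal sketch (the `requests` table and column names are assumptions):
+
+```sql
+-- Total increase over the data, treating any decrease as a counter reset
+SELECT delta(counter_agg(ts, value)) FROM requests;
+```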
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/irate_right/ =====
+
+---
+
+## extrapolated_delta()
+
+**URL:** llms-txt#extrapolated_delta()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/interpolated_delta/ =====
+
+---
+
+## asap_smooth()
+
+**URL:** llms-txt#asap_smooth()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/saturating_sub_pos/ =====
+
+---
+
+## open_time()
+
+**URL:** llms-txt#open_time()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/vwap/ =====
+
+---
+
+## extrapolated_rate()
+
+**URL:** llms-txt#extrapolated_rate()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/rollup/ =====
+
+---
+
+## error()
+
+**URL:** llms-txt#error()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/uddsketch/rollup/ =====
+
+---
+
+## first_val()
+
+**URL:** llms-txt#first_val()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/num_resets/ =====
+
+---
+
+## interpolated_uptime()
+
+**URL:** llms-txt#interpolated_uptime()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/uptime/ =====
+
+---
+
+## interpolate()
+
+**URL:** llms-txt#interpolate()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/downtime/ =====
+
+---
+
+## delta()
+
+**URL:** llms-txt#delta()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/idelta_left/ =====
+
+---
+
+## saturating_sub_pos()
+
+**URL:** llms-txt#saturating_sub_pos()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/state_agg/timeline_agg/ =====
+
+---
+
+## approx_count()
+
+**URL:** llms-txt#approx_count()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/count_min_sketch/intro/ =====
+
+Count the number of times a value appears in a column, using the probabilistic
+[`count-min sketch`][count-min-sketch] data structure and its associated
+algorithms. For applications where a small error rate is tolerable, this can
+result in huge savings in both CPU time and memory, especially for large
+datasets.
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/count_min_sketch/count_min_sketch/ =====
+
+---
+
+## idelta_right()
+
+**URL:** llms-txt#idelta_right()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/first_val/ =====
+
+---
+
+## idelta_left()
+
+**URL:** llms-txt#idelta_left()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/first_time/ =====
+
+---
+
+## gauge_zero_time()
+
+**URL:** llms-txt#gauge_zero_time()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/gauge_agg/corr/ =====
+
+---
+
+## min_frequency()
+
+**URL:** llms-txt#min_frequency()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/freq_agg/freq_agg/ =====
+
+---
+
+## num_gaps()
+
+**URL:** llms-txt#num_gaps()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/trim_to/ =====
+
+---
+
+## Function pipelines
+
+**URL:** llms-txt#function-pipelines
+
+**Contents:**
+- Anatomy of a function pipeline
+ - Timevectors
+ - Custom operator
+ - Pipeline elements
+- Transform elements
+ - Vectorized math functions
+ - Unary mathematical functions
+ - Binary mathematical functions
+ - Compound transforms
+ - Lambda elements
+
+Function pipelines are an experimental feature, designed to radically improve
+how you write queries to analyze data in Postgres and SQL. They work by
+applying principles from functional programming and popular tools like Python
+Pandas and PromQL.
+
+Experimental features could have bugs. They might not be backwards compatible,
+and could be removed in future releases. Use these features at your own risk, and
+do not use any experimental features in production.
+
+The `timevector()` function materializes all its data points in
+memory. This means that if you use it on a very large dataset,
+it runs out of memory. Do not use the `timevector` function
+on a large dataset, or in production.
+
+SQL is the best language for data analysis, but it is not perfect, and at times
+it can be difficult to construct the query you want. For example, this query
+gets data from the last day from the measurements table, sorts the data by the
+time column, calculates the delta between the values, takes the absolute value
+of the delta, and then takes the sum of the result of the previous steps:
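+
+In plain SQL, that query might be sketched like this (the `measurements` table and column names are assumptions):
+
+```sql
+SELECT sum(abs(delta)) FROM (
+    -- Per-row difference from the previous value, ordered by time
+    SELECT value - lag(value) OVER (ORDER BY ts) AS delta
+    FROM measurements
+    WHERE ts >= now() - INTERVAL '1 day'
+) AS deltas;
+```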
+
+You can express the same query with a function pipeline like this:
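+
+A hedged sketch of the pipeline form (table and column names are assumptions; the Toolkit may require schema-qualifying the pipeline functions, for example with `toolkit_experimental`):
+
+```sql
+SELECT (timevector(ts, value) -> sort() -> delta() -> abs() -> sum())
+FROM measurements
+WHERE ts >= now() - INTERVAL '1 day';
+```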
+
+Function pipelines are completely SQL compliant, meaning that any tool that
+speaks SQL is able to support data analysis using function pipelines.
+
+## Anatomy of a function pipeline
+
+Function pipelines are built as a series of elements that work together to
+create your query. The most important part of a pipeline is a custom data type
+called a `timevector`. The other elements then work on the `timevector` to build
+your query, using a custom operator to define the order in which the elements
+are run.
+
+A `timevector` is a collection of time,value pairs with a defined start and end
+time, that could look something like this:
+
+
+
+Your entire database might have time,value pairs that go well into the past and
+continue into the future, but the `timevector` has a defined start and end time
+within that dataset, which could look something like this:
+
+
+
+To construct a `timevector` from your data, use a custom aggregate and pass
+in the columns to become the time,value pairs. It uses a `WHERE` clause to
+define the limits of the subset, and a `GROUP BY` clause to provide identifying
+information about the time-series. For example, to construct a `timevector` from
+a dataset that contains temperatures, the SQL looks like this:
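+
+A hedged sketch (the `temperatures` table and column names are assumptions; the aggregate may need schema qualification, for example `toolkit_experimental.timevector`):
+
+```sql
+SELECT device_id,
+       timevector(ts, temperature)
+FROM temperatures
+WHERE ts >= now() - INTERVAL '1 day'   -- limits of the subset
+GROUP BY device_id;                    -- identifies the time-series
+```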
+
+Function pipelines use a single custom operator of `->`. This operator is used
+to apply and compose multiple functions. The `->` operator takes the inputs on
+the left of the operator, and applies the operation on the right of the
+operator. To put it more plainly, you can think of it as "do the next thing."
+
+A typical function pipeline could look something like this:
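+
+A hedged sketch (table and column names are assumptions):
+
+```sql
+SELECT (timevector(ts, val) -> sort() -> delta() -> abs() -> sum())
+FROM measurements;
+```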
+
+While it might look at first glance as though the `timevector(ts, val)` operation is
+an argument to `sort()`, in a pipeline these are all regular function calls.
+Each of the calls can only operate on the things in their own parentheses, and
+don't know about anything to the left of them in the statement.
+
+Each of the functions in a pipeline returns a custom type that describes the
+function and its arguments; these are all pipeline elements. The `->` operator
+performs one of two different types of actions depending on the types on its
+right and left sides:
+
+* Applies a pipeline element to the left hand argument: performing the
+ function described by the pipeline element on the incoming data type directly.
+* Composes pipeline elements into a combined element that can be applied at
+ some point in the future. This is an optimization that allows you to nest
+ elements to reduce the number of passes that are required.
+
+The operator determines the action to perform based on its left and right
+arguments.
+
+### Pipeline elements
+
+There are two main types of pipeline elements:
+
+* Transforms change the contents of the `timevector`, returning
+ the updated vector.
+* Finalizers finish the pipeline and output the resulting data.
+
+Transform elements take in a `timevector` and produce a `timevector`. They are
+the simplest element to compose, because they produce the same type.
+For example:
+
+Finalizer elements end the `timevector` portion of a pipeline. They can produce
+an output in a specified format, or they can produce an aggregate of the
+`timevector`.
+
+For example, a finalizer element that produces an output:
+
+Or a finalizer element that produces an aggregate:
+
+A third type of pipeline element is aggregate accessors and mutators. These
+work on a `timevector` in a pipeline, but they also work in regular aggregate
+queries. An example of using these in a pipeline:
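+
+A hedged sketch, composing an aggregate with its accessor inside a pipeline:
+
+```sql
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.stats_agg()
+    -> toolkit_experimental.average()
+FROM measurements;
+```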
+
+## Transform elements
+
+Transform elements take a `timevector`, and produce a `timevector`.
+
+### Vectorized math functions
+
+Vectorized math function elements modify each `value` inside the `timevector`
+with the specified mathematical function. They are applied point-by-point and
+they produce a one-to-one mapping from the input to output `timevector`. Each
+point in the input has a corresponding point in the output, with its `value`
+transformed by the mathematical function specified.
+
+Elements are always applied left to right, so the order of operations is not
+taken into account even in the presence of explicit parentheses. This means for
+a `timevector` row `('2020-01-01 00:00:00+00', 20.0)`, this pipeline works:
+
+And this pipeline works in the same way:
+
+Both of these examples produce `('2020-01-01 00:00:00+00', 31.0)`.
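+
+One hedged reconstruction of such a pair, under the assumption that the
+elements are `add(10.0)`, `mul(1.5)`, and `sub(14.0)`, since
+(20 + 10) * 1.5 - 14 = 31:
+
+```sql
+-- applied strictly left to right: ((20.0 + 10.0) * 1.5) - 14.0 = 31.0
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.add(10.0)
+    -> toolkit_experimental.mul(1.5)
+    -> toolkit_experimental.sub(14.0)
+FROM measurements;
+
+-- parentheses only group elements into a composed element; the result is identical
+SELECT toolkit_experimental.timevector(ts, val)
+    -> (toolkit_experimental.add(10.0)
+    -> toolkit_experimental.mul(1.5)
+    -> toolkit_experimental.sub(14.0))
+FROM measurements;
+```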
+
+If multiple arithmetic operations are needed and precedence is important,
+consider using a [Lambda](#lambda-elements) instead.
+
+### Unary mathematical functions
+
+Unary mathematical function elements apply the corresponding mathematical
+function to each datapoint in the `timevector`, leaving the timestamp and
+ordering the same. The available elements are:
+
+|Element|Description|
+|-|-|
+|`abs()`|Computes the absolute value of each value|
+|`cbrt()`|Computes the cube root of each value|
+|`ceil()`|Computes the first integer greater than or equal to each value|
+|`floor()`|Computes the first integer less than or equal to each value|
+|`ln()`|Computes the natural logarithm of each value|
+|`log10()`|Computes the base 10 logarithm of each value|
+|`round()`|Computes the closest integer to each value|
+|`sign()`|Computes +/-1 for each positive/negative value|
+|`sqrt()`|Computes the square root for each value|
+|`trunc()`|Computes only the integer portion of each value|
+
+Even if an element logically computes an integer, `timevectors` only deal with
+double precision floating point values, so the computed value is the
+floating point representation of the integer. For example:
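+
+A minimal sketch with a unary element (names illustrative):
+
+```sql
+-- floor() returns the floating point representation of each integer result
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.floor()
+FROM measurements;
+```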
+
+The output for this example:
+
+### Binary mathematical functions
+
+Binary mathematical function elements run the corresponding mathematical function
+on the `value` in each point in the `timevector`, using the supplied number as
+the second argument of the function. The available elements are:
+
+|Element|Description|
+|-|-|
+|`add(N)`|Computes each value plus `N`|
+|`div(N)`|Computes each value divided by `N`|
+|`logn(N)`|Computes the logarithm base `N` of each value|
+|`mod(N)`|Computes the remainder when each number is divided by `N`|
+|`mul(N)`|Computes each value multiplied by `N`|
+|`power(N)`|Computes each value taken to the `N` power|
+|`sub(N)`|Computes each value less `N`|
+
+With these elements, `vector -> power(2)` squares all of the `values`, and
+`vector -> logn(3)` takes the log base 3 of each `value`. For example:
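+
+A minimal sketch with a binary element:
+
+```sql
+-- power(2) squares each value in the timevector
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.power(2)
+FROM measurements;
+```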
+
+The output for this example:
+
+### Compound transforms
+
+Mathematical transforms are applied only to the `value` in each
+point in a `timevector` and always produce one-to-one output `timevectors`.
+Compound transforms can involve both the `time` and `value` parts of the points
+in the `timevector`, and they are not necessarily one-to-one. One or more points
+in the input can be used to produce zero or more points in the output. So, where
+mathematical transforms always produce `timevectors` of the same length,
+compound transforms can produce larger or smaller `timevectors` as an output.
+
+#### Delta transforms
+
+A `delta()` transform calculates the difference between consecutive `values` in
+the `timevector`. The first point in the `timevector` is omitted as there is no
+previous value and it cannot have a `delta()`. Data should be sorted using the
+`sort()` element before passing into `delta()`. For example:
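+
+A minimal sketch, sorting before taking the delta:
+
+```sql
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.sort()
+    -> toolkit_experimental.delta()
+FROM measurements;
+```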
+
+The output for this example:
+
+The first row of the output is missing, as there is no way to compute a delta
+without a previous value.
+
+#### Fill method transform
+
+The `fill_to()` transform ensures that there is a point at least every
+`interval`. If a point is missing, it fills one in using the method
+provided. The `timevector` must be sorted before calling `fill_to()`. The
+available fill methods are:
+
+|fill_method|description|
+|-|-|
+|LOCF|Last observation carried forward: fill with the last known value prior to the hole|
+|Interpolate|Fill the hole by linear interpolation between the nearest known values on either side|
+|Linear|This is an alias for interpolate|
+|Nearest|Fill with the matching value from the closer of the points preceding or following the hole|
+
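+A hedged sketch; the exact argument form of `fill_to()` is an assumption here:
+
+```sql
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.sort()
+    -> toolkit_experimental.fill_to('1 hour'::interval, 'LOCF')
+FROM measurements;
+```
+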
+The output for this example:
+
+#### Largest triangle three buckets (LTTB) transform
+
+The largest triangle three buckets (LTTB) transform uses the LTTB graphical
+downsampling algorithm to downsample a `timevector` to the specified resolution
+while maintaining visual acuity.
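+
+A hedged sketch; the element name `lttb` and its single resolution argument
+are assumptions here:
+
+```sql
+-- downsample to roughly 100 visually representative points
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.sort()
+    -> toolkit_experimental.lttb(100)
+FROM measurements;
+```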
+
+
+#### Sort transform
+
+The `sort()` transform sorts the `timevector` by time, in ascending order. This
+transform is ignored if the `timevector` is already sorted. For example:
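+
+A minimal sketch:
+
+```sql
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.sort()
+FROM measurements;
+```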
+
+The output for this example:
+
+#### Lambda elements
+
+Lambda elements use the Toolkit's experimental Lambda syntax to transform
+a `timevector`. A Lambda is an expression that is applied to the elements of a `timevector`.
+It is written as a string, usually `$$`-quoted, containing the expression to run.
+For example:
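+
+A hedged sketch of a Lambda passed to the `map()` element (the arithmetic is
+illustrative):
+
+```sql
+-- scale and shift each value using a $$-quoted Lambda expression
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.map($$ $value * 2.0 + 1.0 $$)
+FROM measurements;
+```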
+
+A Lambda expression can be constructed using these components:
+
+* **Variable declarations** such as `let $foo = 3; $foo * $foo`. Variable
+  declarations end with a semicolon. All Lambdas must end with an
+  expression, which does not have a semicolon. Multiple variable declarations
+ can follow one another, for example:
+ `let $foo = 3; let $bar = $foo * $foo; $bar * 10`
+* **Variable names** such as `$foo`. They must start with a `$` symbol. The
+ variables `$time` and `$value` are reserved; they refer to the time and
+ value of the point in the vector the Lambda expression is being called on.
+* **Function calls** such as `abs($foo)`. Most mathematical functions are
+ supported.
+* **Binary operations** using the binary operators `and`,
+  `or`, `=`, `!=`, `<`, `<=`, `>`, `>=`, `^`, `*`, `/`, `+`, and `-`.
+* **Interval literals** are expressed with a trailing `i`. For example,
+ `'1 day'i`. Except for the trailing `i`, these follow the Postgres
+ `INTERVAL` input format.
+* **Time literals** such as `'2021-01-02 03:00:00't` expressed with a
+ trailing `t`. Except for the trailing `t` these follow the Postgres
+ `TIMESTAMPTZ` input format.
+* **Number literals** such as `42`, `0.0`, `-7`, or `1e2`.
+
+Lambdas follow a formal grammar, given here roughly in EBNF. For example:
+
+The `map()` Lambda maps over each element of the `timevector`. This Lambda must
+return either a `DOUBLE PRECISION`, in which case only the value of each point in the
+`timevector` is altered, or a `(TIMESTAMPTZ, DOUBLE PRECISION)`, in which case both the
+times and values are changed. An example of the `map()` Lambda with a
+`DOUBLE PRECISION` return:
+
+The output for this example:
+
+An example of the `map()` Lambda with a `(TIMESTAMPTZ, DOUBLE PRECISION)`
+return:
+
+The output for this example:
+
+The `filter()` Lambda filters a `timevector` using a Lambda expression that
+returns `true` for every point that should stay in the `timevector`,
+and `false` for every point that should be removed. For example:
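+
+A minimal sketch, keeping only positive values:
+
+```sql
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.filter($$ $value > 0.0 $$)
+FROM measurements;
+```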
+
+The output for this example:
+
+## Finalizer elements
+
+Finalizer elements complete the function pipeline, and output a value or an
+aggregate.
+
+You can finalize a pipeline with a `timevector` output element. These are used
+at the end of a pipeline to return a `timevector`. This can be useful if you
+need to use it in another pipeline later on. The two types of output are:
+
+* `unnest()`, which returns a set of `(TimestampTZ, DOUBLE PRECISION)` pairs.
+* `materialize()`, which forces the pipeline to materialize a `timevector`.
+ This blocks any optimizations that lazily materialize a `timevector`.
+
+### Aggregate output elements
+
+These elements take a `timevector` and run the corresponding aggregate over it
+to produce a result. The possible elements are:
+
+* `average()`
+* `integral()`
+* `counter_agg()`
+* `hyperloglog()`
+* `stats_agg()`
+* `sum()`
+* `num_vals()`
+
+An example of an aggregate output using `num_vals()`:
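+
+A minimal sketch:
+
+```sql
+SELECT toolkit_experimental.timevector(ts, val)
+    -> toolkit_experimental.num_vals()
+FROM measurements;
+```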
+
+The output for this example:
+
+An example of an aggregate output using `stats_agg()`:
+
+The output for this example:
+
+## Aggregate accessors and mutators
+
+Aggregate accessors and mutators work in function pipelines in the same way as
+they do in other aggregates. You can use them to get a value from the aggregate
+part of a function pipeline. For example:
+
+When you use them in a pipeline instead of standard function calls, they can
+make the syntax clearer by getting rid of nested function invocations.
+For example, the nested syntax looks like this:
+
+Using a function pipeline with the `->` operator instead looks like this:
+
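+A hedged side-by-side sketch of the two styles, using `stats_agg` and its
+`average` accessor (table and column names are illustrative):
+
+```sql
+-- nested accessor syntax
+SELECT average(stats_agg(val)) FROM measurements;
+
+-- the same accessor applied with the pipeline operator
+SELECT stats_agg(val) -> average() FROM measurements;
+```
+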
+### Counter aggregates
+
+Counter aggregates handle counters that reset. Counters are a common type of
+metric in application performance monitoring. Resets are accounted for in all
+values. When used in a pipeline, these elements must have a `CounterSummary` to
+their left, produced by a `counter_agg()` aggregate or pipeline element. The
+available counter aggregate functions are:
+
+|Element|Description|
+|-|-|
+|`counter_zero_time()`|The time at which the counter value is predicted to have been zero, based on the least squares fit of the points input to the `CounterSummary` (x-intercept)|
+|`corr()`|The correlation coefficient of the least squares fit line of the adjusted counter value|
+|`delta()`|Computes the difference between the last and first values of the counter|
+|`extrapolated_delta(method)`|Computes the delta extrapolated to the bounds of the range using the provided method. Bounds must be provided in the aggregate or in a `with_bounds` call.|
+|`idelta_left()`/`idelta_right()`|Computes the instantaneous difference between the second and first points (left) or last and next-to-last points (right)|
+|`intercept()`|The y-intercept of the least squares fit line of the adjusted counter value|
+|`irate_left()`/`irate_right()`|Computes the instantaneous rate of change between the second and first points (left) or last and next-to-last points (right)|
+|`num_changes()`|Number of times the counter changed values|
+|`num_elements()`|Number of points; any points with exactly the same time are counted only once|
+|`num_resets()`|Number of times the counter reset|
+|`slope()`|The slope of the least squares fit line of the adjusted counter value|
+|`with_bounds(range)`|Applies bounds using the `range` (a `TSTZRANGE`) to the `CounterSummary` if they weren't provided in the aggregation step|
+
+### Percentile approximation
+
+Percentile approximation aggregate accessors are used to approximate
+percentiles. Currently, accessors are implemented only for `percentile_agg` and
+`uddsketch`-based aggregates. The pipeline aggregate for percentile
+approximation with `tdigest` is not yet implemented.
+
+|Element|Description|
+|---|---|
+|`approx_percentile(p)`|The approximate value at percentile `p`|
+|`approx_percentile_rank(v)`|The approximate percentile a value `v` would fall in|
+|`error()`|The maximum relative error guaranteed by the approximation|
+|`mean()`|The exact average of the input values|
+|`num_vals()`|The number of input values|
+
+### Statistical aggregates
+
+Statistical aggregate accessors add support for common statistical aggregates.
+These allow you to compute and `rollup()` common statistical aggregates like
+`average` and `stddev`, more advanced aggregates like `skewness`, and
+two-dimensional aggregates like `slope` and `covariance`. Because there are
+both single-dimensional and two-dimensional versions of these, the accessors can
+have multiple forms. For example, `average()` calculates the average on a
+single-dimension aggregate, while `average_y()` and `average_x()` calculate the
+average on each of two dimensions. The available statistical aggregates are:
+
+|Element|Description|
+|-|-|
+|`average()/average_y()/average_x()`|The average of the values|
+|`corr()`|The correlation coefficient of the least squares fit line|
+|`covariance(method)`|The covariance of the values using either `population` or `sample` method|
+| `determination_coeff()`|The determination coefficient (or R squared) of the values|
+|`kurtosis(method)/kurtosis_y(method)/kurtosis_x(method)`|The kurtosis (fourth moment) of the values using either the `population` or `sample` method|
+|`intercept()`|The intercept of the least squares fit line|
+|`num_vals()`|The number of values seen|
+|`skewness(method)/skewness_y(method)/skewness_x(method)`|The skewness (third moment) of the values using either the `population` or `sample` method|
+|`slope()`|The slope of the least squares fit line|
+|`stddev(method)/stddev_y(method)/stddev_x(method)`|The standard deviation of the values using either the `population` or `sample` method|
+|`sum()`|The sum of the values|
+|`variance(method)/variance_y(method)/variance_x(method)`|The variance of the values using either the `population` or `sample` method|
+|`x_intercept()`|The x intercept of the least squares fit line|
+
+### Time-weighted averages aggregates
+
+The `average()` accessor can be called on the output of a `time_weight()`. For
+example:
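+
+A minimal sketch, assuming the Toolkit schema is on the search path:
+
+```sql
+SELECT time_weight('LOCF', ts, val) -> average() AS time_weighted_avg
+FROM measurements;
+```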
+
+### Approximate count distinct aggregates
+
+This is an approximation for distinct counts. The `distinct_count()` accessor
+can be called on the output of a `hyperloglog()`. For example:
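+
+A minimal sketch; the bucket count (64) and column names are illustrative:
+
+```sql
+SELECT hyperloglog(64, device_id) -> distinct_count() AS approx_devices
+FROM measurements;
+```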
+
+## Formatting timevectors
+
+You can turn a timevector into a formatted text representation. There are two
+functions for turning a timevector to text:
+
+* [`to_text`](#to-text), which allows you to specify the template
+* [`to_plotly`](#to-plotly), which outputs a format suitable for use with the
+ [Plotly JSON chart schema][plotly]
+
+### to_text
+
+This function produces a text representation, formatted according to the
+`format_string`. The format string can use any valid Tera template
+syntax, and it can include any of the built-in variables:
+
+* `TIMES`: All the times in the timevector, as an array
+* `VALUES`: All the values in the timevector, as an array
+* `TIMEVALS`: All the time-value pairs in the timevector, formatted as
+ `{"time": $TIME, "val": $VAL}`, as an array
+
+For example, given this table of data:
+
+You can use a format string with `TIMEVALS` to produce the following text:
+
+Or you can use a format string with `TIMES` and `VALUES` to produce the
+following text:
+
+### to_plotly
+
+This function produces a text representation, formatted for use with Plotly.
+
+For example, given this table of data:
+
+You can produce the following Plotly-compatible text:
+
+## All function pipeline elements
+
+This table lists all function pipeline elements in alphabetical order:
+
+|Element|Category|Output|
+|-|-|-|
+|`abs()`|Unary Mathematical|`timevector` pipeline|
+|`add(val DOUBLE PRECISION)`|Binary Mathematical|`timevector` pipeline|
+|`average()`|Aggregate Finalizer|DOUBLE PRECISION|
+|`cbrt()`|Unary Mathematical| `timevector` pipeline|
+|`ceil()`|Unary Mathematical| `timevector` pipeline|
+|`counter_agg()`|Aggregate Finalizer|`CounterSummary`|
+|`delta()`|Compound|`timevector` pipeline|
+|`div`|Binary Mathematical|`timevector` pipeline|
+|`fill_to`|Compound|`timevector` pipeline|
+|`filter`|Lambda|`timevector` pipeline|
+|`floor`|Unary Mathematical|`timevector` pipeline|
+|`hyperloglog`|Aggregate Finalizer|HyperLogLog|
+|`ln`|Unary Mathematical|`timevector` pipeline|
+|`log10`|Unary Mathematical|`timevector` pipeline|
+|`logn`|Binary Mathematical|`timevector` pipeline|
+|`lttb`|Compound|`timevector` pipeline|
+|`map`|Lambda|`timevector` pipeline|
+|`materialize`|Output|`timevector` pipeline|
+|`mod`|Binary Mathematical|`timevector` pipeline|
+|`mul`|Binary Mathematical|`timevector` pipeline|
+|`num_vals`|Aggregate Finalizer|BIGINT|
+|`power`|Binary Mathematical|`timevector` pipeline|
+|`round`|Unary Mathematical|`timevector` pipeline|
+|`sign`|Unary Mathematical|`timevector` pipeline|
+|`sort`|Compound|`timevector` pipeline|
+|`sqrt`|Unary Mathematical|`timevector` pipeline|
+|`stats_agg`|Aggregate Finalizer|StatsSummary1D|
+|`sub`|Binary Mathematical|`timevector` pipeline|
+|`sum`|Aggregate Finalizer|DOUBLE PRECISION|
+|`trunc`|Unary Mathematical|`timevector` pipeline|
+|`unnest`|Output|`TABLE (time TIMESTAMPTZ, value DOUBLE PRECISION)`|
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/hyperfunctions/time-weighted-averages/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+SELECT device_id,
+    sum(abs_delta) AS volatility
+FROM (
+    SELECT device_id,
+        abs(val - lag(val) OVER last_day) AS abs_delta
+    FROM measurements
+    WHERE ts >= now() - '1 day'::interval
+    WINDOW last_day AS (PARTITION BY device_id ORDER BY ts)
+) calc_delta
+GROUP BY device_id;
+```
+
+Example 2 (sql):
+```sql
+SELECT device_id,
+ toolkit_experimental.timevector(ts, val)
+ -> toolkit_experimental.sort()
+ -> toolkit_experimental.delta()
+ -> toolkit_experimental.abs()
+ -> toolkit_experimental.sum() as volatility
+FROM measurements
+WHERE ts >= now() - '1 day'::interval
+GROUP BY device_id;
+```
+
+Example 3 (sql):
+```sql
+SELECT device_id,
+ toolkit_experimental.timevector(ts, val)
+FROM measurements
+WHERE ts >= now() - '1 day'::interval
+GROUP BY device_id;
+```
+
+Example 4 (sql):
+```sql
+SELECT device_id,
+ toolkit_experimental.timevector(ts, val)
+ -> toolkit_experimental.sort()
+ -> toolkit_experimental.delta()
+ -> toolkit_experimental.abs()
+ -> toolkit_experimental.sum() as volatility
+FROM measurements
+WHERE ts >= now() - '1 day'::interval
+GROUP BY device_id;
+```
+
+---
+
+## low_time()
+
+**URL:** llms-txt#low_time()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/intro/ =====
+
+Perform analysis of financial asset data. These specialized hyperfunctions make
+it easier to write financial analysis queries that involve candlestick data.
+
+They help you answer questions such as:
+
+* What are the opening and closing prices of these stocks?
+* When did the highest price occur for this stock?
+
+This function group uses the [two-step aggregation][two-step-aggregation]
+pattern. In addition to the usual aggregate function,
+[`candlestick_agg`][candlestick_agg], it also includes the pseudo-aggregate
+function `candlestick`. `candlestick_agg` produces a candlestick aggregate from
+raw tick data, which can then be used with the accessor and rollup functions in
+this group. `candlestick` takes pre-aggregated data and transforms it into the
+same format that `candlestick_agg` produces. This allows you to use the
+accessors and rollups with existing candlestick data.
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/close_time/ =====
+
+---
+
+## interpolated_state_periods()
+
+**URL:** llms-txt#interpolated_state_periods()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/state_agg/state_periods/ =====
+
+---
+
+## Time-weighted average functions
+
+**URL:** llms-txt#time-weighted-average-functions
+
+This section contains functions related to time-weighted averages and integrals.
+Time weighted averages and integrals are commonly used in cases where a time
+series is not evenly sampled, so a traditional average gives misleading results.
+For more information about these functions, see the
+[hyperfunctions documentation][hyperfunctions-time-weight-average].
+
+Some hyperfunctions are included in the default TimescaleDB product. For
+additional hyperfunctions, you need to install the
+[TimescaleDB Toolkit][install-toolkit] Postgres extension.
+
+
+
+===== PAGE: https://docs.tigerdata.com/api/counter_aggs/ =====
+
+---
+
+## dead_ranges()
+
+**URL:** llms-txt#dead_ranges()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/live_at/ =====
+
+---
+
+## time_weight()
+
+**URL:** llms-txt#time_weight()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/time_weight/integral/ =====
+
+---
+
+## interpolated_integral()
+
+**URL:** llms-txt#interpolated_integral()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/time_weight/first_time/ =====
+
+---
+
+## interpolated_rate()
+
+**URL:** llms-txt#interpolated_rate()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/intercept/ =====
+
+---
+
+## uuid_version()
+
+**URL:** llms-txt#uuid_version()
+
+**Contents:**
+- Samples
+- Arguments
+
+Extract the version number from a UUID object:
+
+
+
+Returns something like:
+
+| Name | Type | Default | Required | Description |
+|-|------------------|-|----------|----------------------------------------------------|
+|`uuid`|UUID| - | ✔ | The UUID object to extract the version number from |
+
+===== PAGE: https://docs.tigerdata.com/api/uuid-functions/generate_uuidv7/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+postgres=# SELECT uuid_version('019913ce-f124-7835-96c7-a2df691caa98');
+```
+
+Example 2 (terminaloutput):
+```terminaloutput
+uuid_version
+--------------
+ 7
+```
+
+---
+
+## last_val()
+
+**URL:** llms-txt#last_val()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/extrapolated_delta/ =====
+
+---
+
+## count_min_sketch()
+
+**URL:** llms-txt#count_min_sketch()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/freq_agg/topn/ =====
+
+---
+
+## candlestick_agg()
+
+**URL:** llms-txt#candlestick_agg()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/candlestick_agg/low_time/ =====
+
+---
+
+## locf()
+
+**URL:** llms-txt#locf()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/tdigest/tdigest/ =====
+
+---
+
+## interpolated_duration_in()
+
+**URL:** llms-txt#interpolated_duration_in()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/compact_state_agg/duration_in/ =====
+
+---
+
+## integral()
+
+**URL:** llms-txt#integral()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/time_weight/last_time/ =====
+
+---
+
+## README
+
+**URL:** llms-txt#readme
+
+**Contents:**
+- Bulk editing for API frontmatter
+ - `extract_excerpts.sh`
+ - `insert_excerpts.sh`
+
+This directory includes helper scripts for writing and editing docs content. It
+doesn't include scripts for building content; those are in the web-documentation
+repo.
+
+## Bulk editing for API frontmatter
+API frontmatter metadata is stored with the API content it describes. This makes
+sense in most cases, but sometimes you want to bulk edit metadata or compare
+phrasing across all API references. There are 2 scripts to help with this. They
+are currently written to edit the `excerpts` field, but can be adapted for other
+fields.
+
+### `extract_excerpts.sh`
+This extracts the excerpt from every API reference into a single file named
+`extracted_excerpts.md`.
+
+To use:
+1. `cd` into the `_scripts/` directory.
+1. If you already have an `extracted_excerpts.md` file from a previous run,
+ delete it.
+1. Run `./extract_excerpts.sh`.
+1. Open `extracted_excerpts.md` and edit the excerpts directly within the file.
+ Only change the actual excerpts, not the filename or `excerpt: ` label.
+ Otherwise, the next script fails.
+
+### `insert_excerpts.sh`
+This takes the edited excerpts from `extracted_excerpts.md` and updates the
+original files with the new edits. A backup is created so the data is saved if
+something goes horribly wrong. (If something goes wrong with the backup, you can
+always also restore from git.)
+
+To use:
+1. Save your edited `extracted_excerpts.md`.
+1. Make sure you are in the `_scripts/` directory.
+1. Run `./insert_excerpts.sh`.
+1. Run `git diff` to double-check that the update worked correctly.
+1. Delete the unnecessary backups.
+
+===== PAGE: https://docs.tigerdata.com/navigation/index/ =====
+
+---
+
+## distinct_count()
+
+**URL:** llms-txt#distinct_count()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/hyperloglog/hyperloglog/ =====
+
+---
+
+## time_delta()
+
+**URL:** llms-txt#time_delta()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/slope/ =====
+
+---
+
+## Jobs
+
+**URL:** llms-txt#jobs
+
+Jobs allow you to run functions and procedures implemented in a
+language of your choice on a schedule within Timescale. This allows you to
+automate periodic tasks that are not covered by existing policies, and
+even to enhance existing policies with additional functionality.
+
+The following APIs and views allow you to manage the jobs that you create and
+get details around automatic jobs used by other TimescaleDB functions like
+continuous aggregation refresh policies and data retention policies. To view the
+policies that you set or the policies that already exist, see
+[informational views][informational-views].
+
+===== PAGE: https://docs.tigerdata.com/api/uuid-functions/ =====
+
+---
+
+## API reference tag overview
+
+**URL:** llms-txt#api-reference-tag-overview
+
+**Contents:**
+- Community Community
+- Experimental (TimescaleDB Experimental Schema) Experimental
+- Toolkit Toolkit
+- Experimental (TimescaleDB Toolkit) Experimental
+
+The TimescaleDB API Reference uses tags to categorize functions. The tags are
+`Community`, `Experimental`, `Toolkit`, and `Experimental (Toolkit)`. This
+section explains each tag.
+
+## Community Community
+
+This tag indicates that the function is available under TimescaleDB Community
+Edition, and is not available under the Apache 2 Edition. For more information,
+visit the [TimescaleDB License comparison sheet][tsl-comparison].
+
+## Experimental (TimescaleDB Experimental Schema) Experimental
+
+This tag indicates that the function is included in the TimescaleDB experimental
+schema. Do not use experimental functions in production. Experimental features
+could include bugs, and are likely to change in future versions. The
+experimental schema is used by TimescaleDB to develop new features more quickly.
+If experimental functions are successful, they can move out of the experimental
+schema and go into production use.
+
+When you upgrade the `timescaledb` extension, the experimental schema is removed
+by default. To use experimental features after an upgrade, you need to add the
+experimental schema again.
+
+For more information about the experimental
+schema, [read the Tiger Data blog post][experimental-blog].
+
+## Toolkit Toolkit
+
+This tag indicates that the function is included in the TimescaleDB Toolkit extension.
+Toolkit functions are available under TimescaleDB Community Edition.
+For installation instructions, [see the installation guide][toolkit-install].
+
+## Experimental (TimescaleDB Toolkit) Experimental
+
+This tag is used with the Toolkit tag. It indicates a Toolkit function that is
+under active development. Do not use experimental toolkit functions in
+production. Experimental toolkit functions could include bugs, and are likely to
+change in future versions.
+
+These functions might not correctly handle unusual use cases or errors, and they
+could have poor performance. Updates to the TimescaleDB extension drop database
+objects that depend on experimental features like this function. If you use
+experimental toolkit functions on Timescale, this function is
+automatically dropped when the Toolkit extension is updated. For more
+information, [see the TimescaleDB Toolkit docs][toolkit-docs].
+
+===== PAGE: https://docs.tigerdata.com/api/api-reference/ =====
+
+---
+
+## saturating_sub()
+
+**URL:** llms-txt#saturating_sub()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/gp_lttb/ =====
+
+---
+
+## Using REST API in Managed Service for TimescaleDB
+
+**URL:** llms-txt#using-rest-api-in-managed-service-for-timescaledb
+
+**Contents:**
+ - Using cURL to get your details
+
+Managed Service for TimescaleDB has an API for integration and automation tasks.
+For information about using the endpoints, see the [API Documentation][aiven-api].
+MST offers an HTTP API with token authentication and JSON-formatted data. You
+can use the API for all the tasks that can be performed using the MST Console.
+To get started, first create an authentication token, then use
+the token in the header when you call the API endpoints.
+
+1. In [Managed Service for TimescaleDB][mst-login], click `User Information` in the top right corner.
+1. In the `User Profile` page, navigate to the `Authentication` tab.
+1. Click `Generate Token`.
+1. In the `Generate access token` dialog, type a descriptive name for the
+ token and leave the rest of the fields blank.
+1. Copy the generated authentication token and save it.
+
+### Using cURL to get your details
+
+1. Set the environment variable `MST_API_TOKEN` with the access token that you generate:
+
+1. To get the details about the current user session using the `/me` endpoint:
+
+The output looks similar to this:
+
+===== PAGE: https://docs.tigerdata.com/mst/identify-index-issues/ =====
+
+**Examples:**
+
+Example 1 (bash):
+```bash
+export MST_API_TOKEN="access token"
+```
+
+Example 2 (bash):
+```bash
+curl -s -H "Authorization: aivenv1 $MST_API_TOKEN" https://api.aiven.io/v1/me|json_pp
+```
+
+Example 3 (json):
+```json
+{
+ "user": {
+ "auth": [],
+ "create_time": "string",
+ "features": { },
+ "intercom": {},
+ "invitations": [],
+ "project_membership": {},
+ "project_memberships": {},
+ "projects": [],
+ "real_name": "string",
+ "state": "string",
+ "token_validity_begin": "string",
+ "user": "string",
+ "user_id": "string"
+ }
+ }
+```
+
+---
+
+## num_changes()
+
+**URL:** llms-txt#num_changes()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/interpolated_rate/ =====
+
+---
+
+## counter_agg()
+
+**URL:** llms-txt#counter_agg()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/counter_agg/rate/ =====
+
+---
+
+## live_at()
+
+**URL:** llms-txt#live_at()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/heartbeat_agg/heartbeat_agg/ =====
+
+---
+
+## max_frequency()
+
+**URL:** llms-txt#max_frequency()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/freq_agg/into_values/ =====
+
+---
+
+## hyperloglog()
+
+**URL:** llms-txt#hyperloglog()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/hyperloglog/rollup/ =====
+
+---
+
+## gauge_agg()
+
+**URL:** llms-txt#gauge_agg()
+
+===== PAGE: https://docs.tigerdata.com/api/_hyperfunctions/gauge_agg/rate/ =====
+
+---
diff --git a/skills/timescaledb/references/compression.md b/skills/timescaledb/references/compression.md
new file mode 100644
index 0000000..b618ac0
--- /dev/null
+++ b/skills/timescaledb/references/compression.md
@@ -0,0 +1,3226 @@
+# Timescaledb - Compression
+
+**Pages:** 19
+
+---
+
+## Inserting or modifying data in the columnstore
+
+**URL:** llms-txt#inserting-or-modifying-data-in-the-columnstore
+
+**Contents:**
+- Earlier versions of TimescaleDB (before v2.11.0)
+
+In TimescaleDB [v2.11.0][tsdb-release-2-11-0] and later, you can use the `UPDATE` and `DELETE`
+commands to modify existing rows in compressed chunks. This works in a similar
+way to `INSERT` operations. To reduce the amount of decompression, TimescaleDB only attempts to decompress data where necessary.
+However, if there are no qualifiers, or if the qualifiers cannot be used as filters, calls to `UPDATE` and `DELETE` may convert large amounts of data to the rowstore and back to the columnstore.
+To avoid large-scale conversion, filter on the columns you use in `segmentby` and `orderby`. This filters out as much data as possible before any data is modified, and reduces the amount of data conversion.
+
+DML operations on the columnstore work even if the data you are inserting has
+unique constraints. Constraints are preserved during the insert operation.
+TimescaleDB uses a Postgres function that decompresses relevant data during the insert
+to check whether the new data violates any unique constraints. This means that any time you insert data
+into the columnstore, a small amount of data is decompressed to allow a
+speculative insertion, and to block any inserts which could violate constraints.
+
+For TimescaleDB [v2.17.0][tsdb-release-2-17-0] and later, delete performance is improved on compressed
+hypertables when a large amount of data is affected. When you delete whole segments of
+data, filter your deletes by the `segmentby` column(s) instead of issuing separate deletes.
+This considerably increases performance by skipping the decompression step.
+Since TimescaleDB [v2.21.0][tsdb-release-2-21-0], `DELETE` operations on the columnstore
+are executed at the batch level, which allows more performant deletion of data in non-segmentby columns
+and reduces I/O usage.
+
+## Earlier versions of TimescaleDB (before v2.11.0)
+
+This feature requires Postgres 14 or later
+
+From TimescaleDB v2.3.0, you can insert data into compressed chunks, with some
+limitations. The primary limitation is that you can't insert data with unique
+constraints. Additionally, newly inserted data is not compressed immediately: it
+must be recompressed into the chunk separately, either by a running recompression
+policy, or by calling `recompress_chunk` manually on the chunk.
+
+In TimescaleDB v2.2.0 and earlier, you cannot insert data into compressed chunks.
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/jobs/create-and-manage-jobs/ =====
+
+---
+
+## timescaledb_information.jobs
+
+**URL:** llms-txt#timescaledb_information.jobs
+
+**Contents:**
+- Samples
+- Arguments
+
+Shows information about all jobs registered with the automation framework.
+
+Shows a job associated with the refresh policy for continuous aggregates:
+
+Find all jobs related to compression policies (before TimescaleDB v2.20):
+
+Find all jobs related to columnstore policies (TimescaleDB v2.20 and later):
+
+|Name|Type| Description |
+|-|-|--------------------------------------------------------------------------------------------------------------|
+|`job_id`|`INTEGER`| The ID of the background job |
+|`application_name`|`TEXT`| Name of the policy or job |
+|`schedule_interval`|`INTERVAL`| The interval at which the job runs. Defaults to 24 hours |
+|`max_runtime`|`INTERVAL`| The maximum amount of time the job is allowed to run by the background worker scheduler before it is stopped |
+|`max_retries`|`INTEGER`| The number of times the job is retried if it fails |
+|`retry_period`|`INTERVAL`| The amount of time the scheduler waits between retries of the job on failure |
+|`proc_schema`|`TEXT`| Schema name of the function or procedure executed by the job |
+|`proc_name`|`TEXT`| Name of the function or procedure executed by the job |
+|`owner`|`TEXT`| Owner of the job |
+|`scheduled`|`BOOLEAN`| Set to `true` to run the job automatically |
+|`fixed_schedule`|BOOLEAN| Set to `true` for jobs executing at fixed times according to a schedule interval and initial start |
+|`config`|`JSONB`| Configuration passed to the function specified by `proc_name` at execution time |
+|`next_start`|`TIMESTAMP WITH TIME ZONE`| Next start time for the job, if it is scheduled to run automatically |
+|`initial_start`|`TIMESTAMP WITH TIME ZONE`| Time the job is first run and also the time on which execution times are aligned for jobs with fixed schedules |
+|`hypertable_schema`|`TEXT`| Schema name of the hypertable. Set to `NULL` for jobs that do not act on a hypertable |
+|`hypertable_name`|`TEXT`| Table name of the hypertable. Set to `NULL` for jobs that do not act on a hypertable |
+|`check_schema`|`TEXT`| Schema name of the optional configuration validation function, set when the job is created or updated |
+|`check_name`|`TEXT`| Name of the optional configuration validation function, set when the job is created or updated |
+
+===== PAGE: https://docs.tigerdata.com/api/informational-views/hypertables/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+SELECT * FROM timescaledb_information.jobs;
+job_id | 1001
+application_name | Refresh Continuous Aggregate Policy [1001]
+schedule_interval | 01:00:00
+max_runtime | 00:00:00
+max_retries | -1
+retry_period | 01:00:00
+proc_schema | _timescaledb_internal
+proc_name | policy_refresh_continuous_aggregate
+owner | postgres
+scheduled | t
+config | {"start_offset": "20 days", "end_offset": "10
+days", "mat_hypertable_id": 2}
+next_start | 2020-10-02 12:38:07.014042-04
+hypertable_schema | _timescaledb_internal
+hypertable_name | _materialized_hypertable_2
+check_schema | _timescaledb_internal
+check_name | policy_refresh_continuous_aggregate_check
+```
+
+Example 2 (sql):
+```sql
+SELECT * FROM timescaledb_information.jobs where application_name like 'Compression%';
+-[ RECORD 1 ]-----+--------------------------------------------------
+job_id | 1002
+application_name | Compression Policy [1002]
+schedule_interval | 15 days 12:00:00
+max_runtime | 00:00:00
+max_retries | -1
+retry_period | 01:00:00
+proc_schema | _timescaledb_internal
+proc_name | policy_compression
+owner | postgres
+scheduled | t
+config | {"hypertable_id": 3, "compress_after": "60 days"}
+next_start | 2020-10-18 01:31:40.493764-04
+hypertable_schema | public
+hypertable_name | conditions
+check_schema | _timescaledb_internal
+check_name | policy_compression_check
+```
+
+Example 3 (sql):
+```sql
+SELECT * FROM timescaledb_information.jobs where application_name like 'Columnstore%';
+-[ RECORD 1 ]-----+--------------------------------------------------
+job_id | 1002
+application_name | Columnstore Policy [1002]
+schedule_interval | 15 days 12:00:00
+max_runtime | 00:00:00
+max_retries | -1
+retry_period | 01:00:00
+proc_schema | _timescaledb_internal
+proc_name | policy_compression
+owner | postgres
+scheduled | t
+config | {"hypertable_id": 3, "compress_after": "60 days"}
+next_start | 2025-10-18 01:31:40.493764-04
+hypertable_schema | public
+hypertable_name | conditions
+check_schema | _timescaledb_internal
+check_name | policy_compression_check
+```
+
+Example 4 (sql):
+```sql
+SELECT * FROM timescaledb_information.jobs where application_name like 'User-Define%';
+-[ RECORD 1 ]-----+------------------------------
+job_id | 1003
+application_name | User-Defined Action [1003]
+schedule_interval | 01:00:00
+max_runtime | 00:00:00
+max_retries | -1
+retry_period | 00:05:00
+proc_schema | public
+proc_name | custom_aggregation_func
+owner | postgres
+scheduled | t
+config | {"type": "function"}
+next_start | 2020-10-02 14:45:33.339885-04
+hypertable_schema |
+hypertable_name |
+check_schema | NULL
+check_name | NULL
+-[ RECORD 2 ]-----+------------------------------
+job_id | 1004
+application_name | User-Defined Action [1004]
+schedule_interval | 01:00:00
+max_runtime | 00:00:00
+max_retries | -1
+retry_period | 00:05:00
+proc_schema | public
+proc_name | custom_retention_func
+owner | postgres
+scheduled | t
+config | {"type": "function"}
+next_start | 2020-10-02 14:45:33.353733-04
+hypertable_schema |
+hypertable_name |
+check_schema | NULL
+check_name | NULL
+```
+
+---
+
+## Low compression rate
+
+**URL:** llms-txt#low-compression-rate
+
+
+
+Low compression rates are often caused by [high cardinality][cardinality-blog] of the segment key. This means that the column you selected for grouping the rows during compression has too many unique values. This makes it impossible to group a lot of rows in a batch. To achieve better compression results, choose a segment key with lower cardinality.
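+
+A quick way to gauge whether a candidate segment key is too high-cardinality is to compare its distinct values against the total row count. This is a sketch; `conditions` and `device_id` are placeholders for your own table and column:
+
+```sql
+-- Aim for many rows per distinct key value; if distinct_keys is close
+-- to total_rows, batches stay small and compression suffers.
+SELECT count(DISTINCT device_id) AS distinct_keys,
+       count(*)                  AS total_rows
+FROM conditions;
+```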
+
+===== PAGE: https://docs.tigerdata.com/_troubleshooting/dropping-chunks-times-out/ =====
+
+---
+
+## Query time-series data tutorial - set up compression
+
+**URL:** llms-txt#query-time-series-data-tutorial---set-up-compression
+
+**Contents:**
+- Compression setup
+- Add a compression policy
+- Taking advantage of query speedups
+
+You have now seen how to create a hypertable for your NYC taxi trip
+data and query it. When ingesting a dataset like this, it is seldom
+necessary to update old data, and the amount of data in the tables
+grows over time. Since this data is mostly immutable, you can compress
+it to save space and avoid incurring additional cost.
+
+It is possible to use disk-oriented compression, like the support
+offered by ZFS and Btrfs, but since TimescaleDB is built for handling
+event-oriented data (such as time-series) it comes with native support
+for compressing data in hypertables.
+
+TimescaleDB compression allows you to store the data in a vastly more
+efficient format, allowing a compression ratio of up to 20x compared to a
+normal Postgres table. The actual ratio is highly dependent on the
+data and configuration.
+
+TimescaleDB compression is implemented natively in Postgres and does
+not require special storage formats. Instead it relies on features of
+Postgres to transform the data into columnar format before
+compression. The use of a columnar format allows better compression
+ratio since similar data is stored adjacently. For more details on how
+the compression format looks, you can look at the [compression
+design][compression-design] section.
+
+A beneficial side-effect of compressing data is that certain queries
+are significantly faster since less data has to be read into
+memory.
+
+1. Connect to the Tiger Cloud service that contains the
+ dataset using, for example `psql`.
+1. Enable compression on the table and pick suitable segment-by and
+ order-by column using the `ALTER TABLE` command:
+
+Depending on the choice of segment-by and order-by columns you can
+ get very different performance and compression ratios. To learn
+ more about how to pick the correct columns, see
+ [here][segment-by-columns].
+1. You can manually compress all the chunks of the hypertable using
+ `compress_chunk` in this manner:
+
+ You can also [automate compression][automatic-compression] by
+ adding a [compression policy][add_compression_policy] which will
+ be covered below.
+1. Now that you have compressed the table you can compare the size of
+ the dataset before and after compression:
+
+ This shows a significant improvement in data usage:
+
+## Add a compression policy
+
+To avoid running the compression step each time you have some data to
+compress you can set up a compression policy. The compression policy
+allows you to compress data that is older than a particular age, for
+example, to compress all chunks that are older than 8 days:
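+
+Using the `rides` hypertable from this tutorial, such a policy is added with `add_compression_policy`:
+
+```sql
+SELECT add_compression_policy('rides', INTERVAL '8 days');
+```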
+
+Compression policies run on a regular schedule, by default once every
+day, which means that you might have up to 9 days of uncompressed data
+with the setting above.
+
+You can find more information on compression policies in the
+[add_compression_policy][add_compression_policy] section.
+
+## Taking advantage of query speedups
+
+Previously, compression was set up to be segmented by `vendor_id` column value.
+This means fetching data by filtering or grouping on that column will be
+more efficient. Ordering is also set to time descending so if you run queries
+which try to order data with that ordering, you should see performance benefits.
+
+For instance, if you run the query example from previous section:
+
+You should see a decent performance difference between running the query on the
+compressed and the decompressed dataset. Try it yourself by running the previous
+query, decompressing the dataset, and running it again while timing the execution.
+You can enable query timing in psql by running:
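+
+For example, with psql's built-in timing toggle:
+
+```sql
+-- psql meta-command: reports the execution time of each query
+\timing
+```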
+
+To decompress the whole dataset, run:
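+
+Following the same pattern as the manual compression step, a sketch that decompresses every chunk of the `rides` hypertable:
+
+```sql
+-- The second argument skips chunks that are already decompressed
+SELECT decompress_chunk(c, true) FROM show_chunks('rides') c;
+```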
+
+On an example setup, the observed speedup was significant:
+700 ms when compressed vs 1.2 s when decompressed.
+
+Try it yourself and see what you get!
+
+===== PAGE: https://docs.tigerdata.com/tutorials/blockchain-query/blockchain-compress/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+ALTER TABLE rides
+ SET (
+ timescaledb.compress,
+ timescaledb.compress_segmentby='vendor_id',
+ timescaledb.compress_orderby='pickup_datetime DESC'
+ );
+```
+
+Example 2 (sql):
+```sql
+SELECT compress_chunk(c) from show_chunks('rides') c;
+```
+
+Example 3 (sql):
+```sql
+SELECT
+ pg_size_pretty(before_compression_total_bytes) as before,
+ pg_size_pretty(after_compression_total_bytes) as after
+ FROM hypertable_compression_stats('rides');
+```
+
+Example 4 (sql):
+```sql
+before | after
+ ---------+--------
+ 1741 MB | 603 MB
+```
+
+---
+
+## add_policies()
+
+**URL:** llms-txt#add_policies()
+
+**Contents:**
+- Samples
+- Required arguments
+- Optional arguments
+- Returns
+
+
+
+Add refresh, compression, and data retention policies to a continuous aggregate
+in one step. The added compression and retention policies apply to the
+continuous aggregate, _not_ to the original hypertable.
+
+Experimental features could have bugs. They might not be backwards compatible,
+and could be removed in future releases. Use these features at your own risk, and
+do not use any experimental features in production.
+
+`add_policies()` does not allow the `schedule_interval` for the continuous aggregate to be set, and instead uses a default value of 1 hour.
+
+If you would like to set this, add your policies manually. See [`add_continuous_aggregate_policy`][add_continuous_aggregate_policy].
+
+Given a continuous aggregate named `example_continuous_aggregate`, add three
+policies to it:
+
+1. Regularly refresh the continuous aggregate to materialize data between 1 day
+ and 2 days old.
+1. Compress data in the continuous aggregate after 20 days.
+1. Drop data in the continuous aggregate after 1 year.
+
+## Required arguments
+
+|Name|Type|Description|
+|-|-|-|
+|`relation`|`REGCLASS`|The continuous aggregate that the policies should be applied to|
+
+## Optional arguments
+
+|Name|Type|Description|
+|-|-|-|
+|`if_not_exists`|`BOOL`|When true, prints a warning instead of erroring if the continuous aggregate doesn't exist. Defaults to false.|
+|`refresh_start_offset`|`INTERVAL` or `INTEGER`|The start of the continuous aggregate refresh window, expressed as an offset from the policy run time.|
+|`refresh_end_offset`|`INTERVAL` or `INTEGER`|The end of the continuous aggregate refresh window, expressed as an offset from the policy run time. Must be greater than `refresh_start_offset`.|
+|`compress_after`|`INTERVAL` or `INTEGER`|Continuous aggregate chunks are compressed if they exclusively contain data older than this interval.|
+|`drop_after`|`INTERVAL` or `INTEGER`|Continuous aggregate chunks are dropped if they exclusively contain data older than this interval.|
+
+For arguments that could be either an `INTERVAL` or an `INTEGER`, use an
+`INTERVAL` if your time bucket is based on timestamps. Use an `INTEGER` if your
+time bucket is based on integers.
+
+Returns `true` if successful.
+
+
+
+
+===== PAGE: https://docs.tigerdata.com/api/continuous-aggregates/create_materialized_view/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+timescaledb_experimental.add_policies(
+ relation REGCLASS,
+ if_not_exists BOOL = false,
+ refresh_start_offset "any" = NULL,
+ refresh_end_offset "any" = NULL,
+ compress_after "any" = NULL,
+    drop_after "any" = NULL
+) RETURNS BOOL
+```
+
+Example 2 (sql):
+```sql
+SELECT timescaledb_experimental.add_policies(
+ 'example_continuous_aggregate',
+ refresh_start_offset => '1 day'::interval,
+ refresh_end_offset => '2 day'::interval,
+ compress_after => '20 days'::interval,
+ drop_after => '1 year'::interval
+);
+```
+
+---
+
+## About writing data
+
+**URL:** llms-txt#about-writing-data
+
+TimescaleDB supports writing data in the same way as Postgres, using `INSERT`,
+`UPDATE`, `INSERT ... ON CONFLICT`, and `DELETE`.
+
+TimescaleDB is optimized for running real-time analytics workloads on time-series data. For this reason, hypertables are optimized for
+inserts to the most recent time intervals. Inserting data with recent time
+values gives
+[excellent performance](https://www.timescale.com/blog/postgresql-timescaledb-1000x-faster-queries-90-data-compression-and-much-more).
+However, if you need to make frequent updates to older time intervals, you
+might see lower write throughput.
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/write-data/upsert/ =====
+
+---
+
+## Decompression
+
+**URL:** llms-txt#decompression
+
+**Contents:**
+- Decompress chunks manually
+ - Decompress individual chunks
+ - Decompress chunks by time
+ - Decompress chunks on more precise constraints
+
+Old API since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Replaced by `convert_to_rowstore`.
+
+Compressing your data reduces the amount of storage space used, but you should
+always leave some additional storage capacity. This gives you the flexibility to
+decompress chunks when necessary, for actions such as bulk inserts.
+
+This section describes commands to use for decompressing chunks. You can filter
+by time to select the chunks you want to decompress.
+
+## Decompress chunks manually
+
+Before decompressing chunks, stop any compression policy on the hypertable you are decompressing.
+Otherwise, the policy automatically recompresses your chunks in the next scheduled job.
+If you accumulate a large amount of chunks that need to be compressed, the [troubleshooting guide][troubleshooting-oom-chunks] shows how to compress a backlog of chunks.
+For more information on how to stop and run compression policies using `alter_job()`, see the [API reference][api-reference-alter-job].
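+
+As a sketch, pausing and resuming a policy with `alter_job` looks like this, where `1002` stands in for the job id reported by `timescaledb_information.jobs`:
+
+```sql
+-- Pause the compression policy before manual decompression
+SELECT alter_job(1002, scheduled => false);
+-- ... decompress and modify chunks ...
+-- Re-enable the policy when you are done
+SELECT alter_job(1002, scheduled => true);
+```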
+
+There are several methods for selecting chunks and decompressing them.
+
+### Decompress individual chunks
+
+To decompress a single chunk by name, run this command:
+
+where `<chunk_name>` is the name of the chunk you want to decompress.
+
+### Decompress chunks by time
+
+To decompress a set of chunks based on a time range, you can use the output of
+`show_chunks` to decompress each one:
+
+For more information about the `decompress_chunk` function, see the `decompress_chunk`
+[API reference][api-reference-decompress].
+
+### Decompress chunks on more precise constraints
+
+If you want to use more precise matching constraints, for example space
+partitioning, you can construct a command like this:
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/compression/compression-on-continuous-aggregates/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+SELECT decompress_chunk('_timescaledb_internal.<chunk_name>');
+```
+
+Example 2 (sql):
+```sql
+SELECT decompress_chunk(c, true)
+ FROM show_chunks('table_name', older_than, newer_than) c;
+```
+
+Example 3 (sql):
+```sql
+SELECT tableoid::regclass FROM metrics
+ WHERE time = '2000-01-01' AND device_id = 1
+ GROUP BY tableoid;
+
+ tableoid
+------------------------------------------
+ _timescaledb_internal._hyper_72_37_chunk
+```
+
+---
+
+## Designing your database for compression
+
+**URL:** llms-txt#designing-your-database-for-compression
+
+**Contents:**
+- Compressing data
+- Querying compressed data
+
+Old API since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Replaced by hypercore.
+
+Time-series data can be unique, in that it needs to handle both shallow and wide
+queries, such as "What's happened across the deployment in the last 10 minutes,"
+and deep and narrow, such as "What is the average CPU usage for this server
+over the last 24 hours." Time-series data usually has a very high rate of
+inserts as well; hundreds of thousands of writes per second can be very normal
+for a time-series dataset. Additionally, time-series data is often very
+granular, and data is collected at a higher resolution than many other
+datasets. This can result in terabytes of data being collected over time.
+
+All this means that if you need great compression rates, you probably need to
+consider the design of your database, before you start ingesting data. This
+section covers some of the things you need to take into consideration when
+designing your database for maximum compression effectiveness.
+
+TimescaleDB is built on Postgres which is, by nature, a row-based database.
+Because time-series data is accessed in order of time, when you enable
+compression, TimescaleDB converts many wide rows of data into a single row of
+data, called an array form. This means that each field of that new, wide row
+stores an ordered set of data comprising the entire column.
+
+For example, if you had a table with data that looked a bit like this:
+
+|Timestamp|Device ID|Status Code|Temperature|
+|-|-|-|-|
+|12:00:01|A|0|70.11|
+|12:00:01|B|0|69.70|
+|12:00:02|A|0|70.12|
+|12:00:02|B|0|69.69|
+|12:00:03|A|0|70.14|
+|12:00:03|B|4|69.70|
+
+You can convert this to a single row in array form, like this:
+
+|Timestamp|Device ID|Status Code|Temperature|
+|-|-|-|-|
+|[12:00:01, 12:00:01, 12:00:02, 12:00:02, 12:00:03, 12:00:03]|[A, B, A, B, A, B]|[0, 0, 0, 0, 0, 4]|[70.11, 69.70, 70.12, 69.69, 70.14, 69.70]|
+
+Even before you compress any data, this format immediately saves storage by
+reducing the per-row overhead. Postgres typically adds a small number of bytes
+of overhead per row. So even without any compression, the schema in this example
+is now smaller on disk than the previous format.
+
+This format arranges the data so that similar data, such as timestamps, device
+IDs, or temperature readings, is stored contiguously. This means that you can
+then use type-specific compression algorithms to compress the data further, and
+each array is separately compressed. For more information about the compression
+methods used, see the [compression methods section][compression-methods].
+
+When the data is in array format, you can perform queries that require a subset
+of the columns very quickly. For example, if you have a query like this one, that
+asks for the average temperature over the past day:
+
+```sql
+SELECT time_bucket('1 minute', time) AS minute,
+       avg(temperature)
+FROM metrics
+WHERE time > now() - interval '1 day'
+GROUP BY minute
+ORDER BY minute DESC;
+```
+
+The query engine can fetch and decompress only the timestamp and temperature
+columns to efficiently compute and return these results.
+
+Finally, TimescaleDB uses non-inline disk pages to store the compressed arrays.
+This means that the in-row data points to a secondary disk page that stores the
+compressed array, and the actual row in the main table becomes very small,
+because it is now just pointers to the data. When data stored like this is
+queried, only the compressed arrays for the required columns are read from disk,
+further improving performance by reducing disk reads and writes.
+
+## Querying compressed data
+
+In the previous example, the database has no way of knowing which rows need to
+be fetched and decompressed to resolve a query. For example, the database can't
+easily determine which rows contain data from the past day, as the timestamp
+itself is in a compressed column. You don't want to have to decompress all the
+data in a chunk, or even an entire hypertable, to determine which rows are
+required.
+
+TimescaleDB automatically includes more information in the row and includes
+additional groupings to improve query performance. When you compress a
+hypertable, either manually or through a compression policy, it can help to specify
+an `ORDER BY` column.
+
+`ORDER BY` columns specify how the rows that are part of a compressed batch are
+ordered. For most time-series workloads, this is by timestamp, so if you don't
+specify an `ORDER BY` column, TimescaleDB defaults to using the time column. You
+can also specify additional dimensions, such as location.
+
+For each `ORDER BY` column, TimescaleDB automatically creates additional columns
+that store the minimum and maximum value of that column. This way, the query
+planner can look at the range of timestamps in the compressed column, without
+having to do any decompression, and determine whether the row could possibly
+match the query.
+
+When you compress your hypertable, you can also choose to specify a `SEGMENT BY`
+column. This allows you to segment compressed rows by a specific column, so that
+each compressed row corresponds to data about a single item, such as a
+specific device ID. This further allows the query planner to
+determine whether the row could possibly match the query without having to decompress
+the column first. For example:
+
+|Device ID|Timestamp|Status Code|Temperature|Min Timestamp|Max Timestamp|
+|-|-|-|-|-|-|
+|A|[12:00:01, 12:00:02, 12:00:03]|[0, 0, 0]|[70.11, 70.12, 70.14]|12:00:01|12:00:03|
+|B|[12:00:01, 12:00:02, 12:00:03]|[0, 0, 4]|[69.70, 69.69, 69.70]|12:00:01|12:00:03|
+
+With the data segmented in this way, a query for device A between a time
+interval becomes quite fast. The query planner can use an index to find those
+rows for device A that contain at least some timestamps corresponding to the
+specified interval, and even a sequential scan is quite fast since evaluating
+device IDs or timestamps does not require decompression. This means the
+query executor only decompresses the timestamp and temperature columns
+corresponding to those selected rows.
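+
+The segmented layout above corresponds to compression settings along these lines (a sketch; `metrics` and its columns are placeholders):
+
+```sql
+ALTER TABLE metrics
+  SET (
+    timescaledb.compress,
+    timescaledb.compress_segmentby = 'device_id',
+    timescaledb.compress_orderby = 'time DESC'
+  );
+```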
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/compression/compression-policy/ =====
+
+---
+
+## remove_compression_policy()
+
+**URL:** llms-txt#remove_compression_policy()
+
+**Contents:**
+- Samples
+- Required arguments
+- Optional arguments
+
+Old API since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Replaced by `remove_columnstore_policy()`.
+
+Removes a compression policy from a hypertable or continuous aggregate. To restart
+policy-based compression, you need to add the policy again. To view the policies that
+already exist, see [informational views][informational-views].
+
+Remove the compression policy from the 'cpu' table:
+
+Remove the compression policy from the 'cpu_weekly' continuous aggregate:
+
+## Required arguments
+
+|Name|Type|Description|
+|-|-|-|
+|`hypertable`|REGCLASS|Name of the hypertable or continuous aggregate the policy should be removed from|
+
+## Optional arguments
+
+|Name|Type|Description|
+|---|---|---|
+| `if_exists` | BOOLEAN | Setting to true causes the command to fail with a notice instead of an error if a compression policy does not exist on the hypertable. Defaults to false.|
+
+===== PAGE: https://docs.tigerdata.com/api/compression/alter_table_compression/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+SELECT remove_compression_policy('cpu');
+```
+
+Example 2 (sql):
+```sql
+SELECT remove_compression_policy('cpu_weekly');
+```
+
+---
+
+## About compression methods
+
+**URL:** llms-txt#about-compression-methods
+
+**Contents:**
+- Integer compression
+ - Delta encoding
+ - Delta-of-delta encoding
+ - Simple-8b
+ - Run-length encoding
+- Floating point compression
+ - XOR-based compression
+- Data-agnostic compression
+ - Dictionary compression
+
+When your data is converted from the rowstore to the columnstore, TimescaleDB
+uses the following compression algorithms, depending on the data type:
+
+- **Integers, timestamps, boolean and other integer-like types**: a combination of the following compression
+ methods is used: [delta encoding][delta], [delta-of-delta][delta-delta], [simple-8b][simple-8b], and
+ [run-length encoding][run-length].
+- **Columns that do not have a high amount of repeated values**: [XOR-based][xor] compression with
+ some [dictionary compression][dictionary].
+- **All other types**: [dictionary compression][dictionary].
+
+This page gives an in-depth explanation of the compression methods used in hypercore.
+
+## Integer compression
+
+For integers, timestamps, and other integer-like types, TimescaleDB uses a
+combination of delta encoding, delta-of-delta, simple-8b, and run-length
+encoding.
+
+The simple-8b compression method has been extended so that data can be
+decompressed in reverse order. Backward scanning queries are common in
+time-series workloads. This means that these types of queries run much faster.
+
+### Delta encoding
+
+Delta encoding reduces the amount of information required to represent a data
+object by only storing the difference, sometimes referred to as the delta,
+between that object and one or more reference objects. These algorithms work
+best where there is a lot of redundant information, and it is often used in
+workloads like versioned file systems. For example, this is how Dropbox keeps
+your files synchronized. Applying delta-encoding to time-series data means that
+you can use fewer bytes to represent a data point, because you only need to
+store the delta from the previous data point.
+
+For example, imagine you had a dataset that collected CPU, free memory,
+temperature, and humidity over time. If your time column was stored as an integer
+value, like seconds since the UNIX epoch, your raw data would look a little like
+this:
+
+|time|cpu|mem_free_bytes|temperature|humidity|
+|-|-|-|-|-|
+|2023-04-01 10:00:00|82|1,073,741,824|80|25|
+|2023-04-01 10:00:05|98|858,993,459|81|25|
+|2023-04-01 10:00:10|98|858,904,583|81|25|
+
+With delta encoding, you only need to store how much each value changed from the
+previous data point, resulting in smaller values to store. So after the first
+row, you can represent subsequent rows with less information, like this:
+
+|time|cpu|mem_free_bytes|temperature|humidity|
+|-|-|-|-|-|
+|2023-04-01 10:00:00|82|1,073,741,824|80|25|
+|5 seconds|16|-214,748,365|1|0|
+|5 seconds|0|-88,876|0|0|
+
+Applying delta encoding to time-series data takes advantage of the fact that
+most time-series datasets are not random, but instead represent something that
+is slowly changing over time. The storage savings over millions of rows can be
+substantial, especially if the value changes very little, or doesn't change at
+all.
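+
+The idea can be sketched in SQL with a window function; `metrics` and its columns are placeholders:
+
+```sql
+-- The first row keeps its full value (the delta is NULL);
+-- every later row stores only the change from the previous row.
+SELECT time,
+       cpu            - lag(cpu)            OVER w AS cpu_delta,
+       mem_free_bytes - lag(mem_free_bytes) OVER w AS mem_delta,
+       temperature    - lag(temperature)    OVER w AS temp_delta
+FROM metrics
+WINDOW w AS (ORDER BY time);
+```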
+
+### Delta-of-delta encoding
+
+Delta-of-delta encoding takes delta encoding one step further and applies
+delta-encoding over data that has previously been delta-encoded. With
+time-series datasets where data collection happens at regular intervals, you can
+apply delta-of-delta encoding to the time column, which results in only needing to
+store a series of zeroes.
+
+In other words, delta encoding stores the first derivative of the dataset, while
+delta-of-delta encoding stores the second derivative of the dataset.
+
+Applied to the example dataset from earlier, delta-of-delta encoding results in this:
+
+|time|cpu|mem_free_bytes|temperature|humidity|
+|-|-|-|-|-|
+|2020-04-01 10:00:00|82|1,073,741,824|80|25|
+|5 seconds|16|-214,748,365|1|0|
+|0 seconds|0|-88,876|0|0|
+
+In this example, delta-of-delta further compresses 5 seconds in the time column
+down to 0 for every entry in the time column after the second row, because the
+five second gap remains constant for each entry. Note that you see two entries
+in the table before the delta-delta 0 values, because you need two deltas to
+compare.
+
+This compresses a full timestamp of 8 bytes, or 64 bits, down to just a single
+bit, resulting in 64x compression.
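+
+Delta-of-delta can be sketched the same way, by delta-encoding the deltas themselves; for a regular sampling interval the result is a stream of zeroes after the first two rows (`metrics` is a placeholder):
+
+```sql
+-- Inner query: first derivative (delta between timestamps).
+-- Outer query: second derivative (delta of the deltas).
+SELECT time,
+       delta - lag(delta) OVER (ORDER BY time) AS delta_of_delta
+FROM (
+  SELECT time,
+         time - lag(time) OVER (ORDER BY time) AS delta
+  FROM metrics
+) d;
+```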
+
+With delta and delta-of-delta encoding, you can significantly reduce the number
+of digits you need to store. But you still need an efficient way to store the
+smaller integers. The previous examples used a standard integer datatype for the
+time column, which needs 64 bits to represent the value of 0 when delta-delta
+encoded. This means that even though you are only storing the integer 0, you are
+still consuming 64 bits to store it, so you haven't actually saved anything.
+
+### Simple-8b
+
+Simple-8b is one of the simplest and smallest methods of storing variable-length
+integers. In this method, integers are stored as a series of fixed-size blocks.
+For each block, every integer within the block is represented by the minimal
+bit-length needed to represent the largest integer in that block. The first bits
+of each block denote the bit-length used for that block.
+
+This technique has the advantage of only needing to store the length once for a
+given block, instead of once for each integer. Because the blocks are of a fixed
+size, you can infer the number of integers in each block from the size of the
+integers being stored.
+
+For example, if you wanted to store a temperature that changed over time, and
+you applied delta encoding, you might end up needing to store this set of
+integers:
+
+|temperature (deltas)|
+|-|
+|1|
+|10|
+|11|
+|13|
+|9|
+|100|
+|22|
+|11|
+
+With a block size of 10 digits, you could store this set of integers as two
+blocks: one block storing five 2-digit numbers, and a second block storing
+three 3-digit numbers, like this:
+
+|block|length|integers|
+|-|-|-|
+|1|2|01, 10, 11, 13, 09|
+|2|3|100, 022, 011|
+
+In this example, both blocks store about 10 digits worth of data, even though
+some of the numbers have to be padded with a leading 0. You might also notice
+that the second block only stores 9 digits, because 10 is not evenly divisible
+by 3.
+
+Simple-8b works in this way, except it uses binary numbers instead of decimal,
+and it usually uses 64-bit blocks. In general, the longer the integers, the
+fewer of them can be stored in each block.
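+
+A simplified greedy packer shows the idea in binary. This is a sketch only:
+real Simple-8b uses a fixed table of selector values rather than arbitrary bit
+widths, and the 60-bit payload (64 bits minus an assumed 4-bit selector) is an
+illustrative layout:
+
```python
def bits_needed(n):
    """Minimum bits to represent a non-negative integer (at least 1)."""
    return max(1, n.bit_length())

def pack_blocks(values, payload_bits=60):
    """Greedily pack integers into blocks of at most payload_bits.

    Every integer in a block is stored at the bit width of the largest
    integer in that block; the width is stored once per block.
    """
    blocks = []
    i = 0
    while i < len(values):
        width, count = bits_needed(values[i]), 1
        # grow the block while every member still fits at the widest width
        while i + count < len(values):
            w = max(width, bits_needed(values[i + count]))
            if (count + 1) * w > payload_bits:
                break
            width, count = w, count + 1
        blocks.append((width, values[i:i + count]))
        i += count
    return blocks

deltas = [1, 10, 11, 13, 9, 100, 22, 11]
print(pack_blocks(deltas))  # one 7-bit-wide block holds all eight values
```
+
+With a 60-bit payload, all eight deltas fit into a single block at 7 bits each
+(56 bits total), whereas storing them as eight 64-bit integers would take 512
+bits.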
+
+### Run-length encoding
+
+Simple-8b compresses integers very well. However, if you have a large number
+of repeats of the same value, you can get even better compression with
+run-length encoding. This method works well for values that don't change very
+often, or when an earlier transformation removes the changes.
+
+Run-length encoding is one of the classic compression algorithms. For
+time-series data with billions of contiguous zeroes, or even a document with a
+million identically repeated strings, run-length encoding works incredibly well.
+
+For example, if you wanted to store a temperature that changed minimally over
+time, and you applied delta encoding, you might end up needing to store this set
+of integers:
+
+|temperature (deltas)|
+|-|
+|11|
+|12|
+|12|
+|12|
+|12|
+|12|
+|12|
+|1|
+|12|
+|12|
+|12|
+|12|
+
+For values like these, you do not need to store each instance of the value, but
+rather how long the run, or number of repeats, is. You can store this set of
+numbers as `{run; value}` pairs like this:
+
+|run|value|
+|-|-|
+|1|11|
+|6|12|
+|1|1|
+|4|12|
+
+This technique uses 11 digits of storage (1, 1, 1, 6, 1, 2, 1, 1, 4, 1, 2),
+rather than the 23 digits that an optimal series of variable-length integers
+requires (11, 12, 12, 12, 12, 12, 12, 1, 12, 12, 12, 12).
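+
+Run-length encoding is nearly a one-liner with Python's `itertools.groupby`:
+
```python
from itertools import groupby

def rle(values):
    """Collapse consecutive repeats into (run, value) pairs."""
    return [(len(list(group)), value) for value, group in groupby(values)]

deltas = [11, 12, 12, 12, 12, 12, 12, 1, 12, 12, 12, 12]
print(rle(deltas))  # [(1, 11), (6, 12), (1, 1), (4, 12)]
```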
+
+Run-length encoding is also used as a building block for many more advanced
+algorithms, such as Simple-8b RLE, which combines the run-length and Simple-8b
+techniques. TimescaleDB implements a variant of Simple-8b RLE that uses
+different block sizes than standard Simple-8b, in order to handle both 64-bit
+values and RLE.
+
+## Floating point compression
+
+For columns that do not have a high amount of repeated values, TimescaleDB uses
+XOR-based compression.
+
+The standard XOR-based compression method has been extended so that data can
+be decompressed in reverse order. Backward scans are common in time-series
+workloads, so queries that use them run much faster.
+
+### XOR-based compression
+
+Floating point numbers are usually more difficult to compress than integers.
+Fixed-length integers often have leading zeroes, but floating point numbers usually
+use all of their available bits, especially if they are converted from decimal
+numbers, which can't be represented precisely in binary.
+
+Techniques like delta-encoding don't work well for floats, because they do not
+reduce the number of bits sufficiently. This means that most floating-point
+compression algorithms tend to be either complex and slow, or truncate
+significant digits. One of the few simple and fast lossless floating-point
+compression algorithms is XOR-based compression, built on top of Facebook's
+Gorilla compression.
+
+XOR is the binary function `exclusive or`. In this algorithm, successive
+floating point numbers are compared with XOR, and a difference results in a bit
+being stored. The first data point is stored without compression, and subsequent
+data points are represented using their XOR'd values.
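+
+The principle can be demonstrated by XOR-ing the raw 64-bit patterns of
+successive doubles. This is a sketch of the comparison step only; the full
+Gorilla scheme additionally bit-packs the leading-zero count and the
+meaningful bits of each XOR result:
+
```python
import struct

def float_bits(x):
    """Reinterpret a double as its 64-bit integer pattern."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def xor_stream(values):
    """First value stored verbatim; each later value as an XOR delta."""
    prev = float_bits(values[0])
    out = [prev]
    for x in values[1:]:
        bits = float_bits(x)
        out.append(bits ^ prev)  # identical values produce 0
        prev = bits
    return out

# a repeated reading XORs to 0; a nearby reading differs in only a few bits
print([hex(v) for v in xor_stream([24.0, 24.0, 24.5])])
# ['0x4038000000000000', '0x0', '0x800000000000']
```
+
+Identical consecutive values XOR to zero, and similar values share their sign,
+exponent, and leading mantissa bits, so the XOR results are mostly zeroes that
+compress well.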
+
+## Data-agnostic compression
+
+For values that are not integers or floating point, TimescaleDB uses dictionary
+compression.
+
+### Dictionary compression
+
+One of the earliest lossless compression algorithms, dictionary compression is
+the basis of many popular compression methods. Dictionary compression can also
+be found in areas outside of computer science, such as medical coding.
+
+Instead of storing values directly, dictionary compression works by making a
+list of the possible values that can appear, and then storing an index into a
+dictionary containing the unique values. This technique is quite versatile, can
+be used regardless of data type, and works especially well when you have a
+limited set of values that repeat frequently.
+
+For example, if you had the list of temperatures shown earlier, but you wanted
+an additional column storing a city location for each measurement, you might
+have a set of values like this:
+
+|City|
+|-|
+|New York|
+|San Francisco|
+|San Francisco|
+|Los Angeles|
+
+Instead of storing all the city names directly, you can instead store a
+dictionary, like this:
+
+|index|value|
+|-|-|
+|0|New York|
+|1|San Francisco|
+|2|Los Angeles|
+
+You can then store just the indices in your column, like this:
+
+|City|
+|-|
+|0|
+|1|
+|1|
+|2|
+
+For a dataset with a lot of repetition, this can offer significant compression.
+In the example, each city name is on average 11 bytes in length, while the
+indices are never going to be more than 4 bytes long, reducing space usage
+nearly 3 times. In TimescaleDB, the list of indices is compressed even further
+with the Simple-8b+RLE method, making the storage cost even smaller.
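+
+A dictionary encoder can be sketched in a few lines:
+
```python
def dict_encode(values):
    """Return (dictionary, indices): each value becomes an index into
    the list of unique values, in order of first appearance."""
    dictionary, indices, seen = [], [], {}
    for v in values:
        if v not in seen:
            seen[v] = len(dictionary)
            dictionary.append(v)
        indices.append(seen[v])
    return dictionary, indices

cities = ["New York", "San Francisco", "San Francisco", "Los Angeles"]
print(dict_encode(cities))
# (['New York', 'San Francisco', 'Los Angeles'], [0, 1, 1, 2])
```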
+
+Dictionary compression doesn't always result in savings. If your dataset doesn't
+have a lot of repeated values, then the dictionary is the same size as the
+original data. TimescaleDB automatically detects this case, and falls back to
+not using a dictionary in that scenario.
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/compression/modify-a-schema/ =====
+
+---
+
+## Changelog
+
+**URL:** llms-txt#changelog
+
+**Contents:**
+- TimescaleDB 2.22.1 – configurable indexing, enhanced partitioning, and faster queries
+ - Highlighted features
+ - Deprecations
+- Kafka Source Connector (beta)
+- Phased update rollouts, `pg_cron`, larger compute options, and backup reports
+ - 🛡️ Phased rollouts for TimescaleDB minor releases
+ - ⏰ pg_cron extension
+ - ⚡️ Larger compute options: 48 and 64 CPU
+ - 📋 Backup report for compliance
+ - 🗺️ New router for Tiger Cloud Console
+
+All the latest features and updates to Tiger Cloud.
+
+## TimescaleDB 2.22.1 – configurable indexing, enhanced partitioning, and faster queries
+
+
+[TimescaleDB 2.22.1](https://github.com/timescale/timescaledb/releases) introduces major performance and flexibility improvements across indexing, compression, and query execution. TimescaleDB 2.22.1 was released on September 30th and is now available to all users of Tiger.
+
+### Highlighted features
+
+* **Configurable sparse indexes:** manually configure sparse indexes (min-max or bloom) on one or more columns of compressed hypertables, optimizing query performance for specific workloads and reducing I/O. In previous versions, these were automatically created based on heuristics and could not be modified.
+
+* **UUIDv7 support:** native support for UUIDv7 for both compression and partitioning. UUIDv7 embeds a time component, improving insert locality and enabling efficient time-based range queries while maintaining global uniqueness.
+
+* **Vectorized UUID compression:** new vectorized compression for UUIDv7 columns doubles query performance and improves storage efficiency by up to 30%.
+
+* **UUIDv7 partitioning:** hypertables can now be partitioned on UUIDv7 columns, combining time-based chunking with globally unique IDs—ideal for large-scale event and log data.
+
+* **Multi-column SkipScan:** expands SkipScan to support multiple distinct keys, delivering millisecond-fast deduplication and `DISTINCT ON` queries across billions of rows. Learn more in our [blog post](https://www.tigerdata.com/blog/skipscan-in-timescaledb-why-distinct-was-slow-how-we-built-it-and-how-you-can-use-it) and [documentation](https://docs.tigerdata.com/use-timescale/latest/query-data/skipscan/).
+
+* **Compression improvements:** default `segmentby` and `orderby` settings are now applied at compression time for each chunk, automatically adapting to evolving data patterns for better performance. These were previously set at the hypertable level and fixed across all chunks.
+
+The experimental Hypercore Table Access Method (TAM) has been removed in this release following advancements in the columnstore architecture.
+
+For a comprehensive list of changes, refer to the TimescaleDB [2.22](https://github.com/timescale/timescaledb/releases/tag/2.22.0) & [2.22.1](https://github.com/timescale/timescaledb/releases/tag/2.22.1) release notes.
+
+## Kafka Source Connector (beta)
+
+
+The new [Kafka Source Connector](https://docs.tigerdata.com/migrate/latest/livesync-for-kafka/) enables you to connect your existing Kafka clusters directly to Tiger Cloud and ingest data from Kafka topics into hypertables. Developers often build proxies or run JDBC Sink Connectors to bridge Kafka and Tiger Cloud, which is error-prone and time-consuming. With the Kafka Source Connector, you can seamlessly start ingesting your Kafka data natively without additional middleware.
+
+- Supported formats: AVRO
+- Supported platforms: Confluent Cloud and Amazon Managed Streaming for Apache Kafka
+
+
+
+
+
+## Phased update rollouts, `pg_cron`, larger compute options, and backup reports
+
+
+### 🛡️ Phased rollouts for TimescaleDB minor releases
+
+Starting with TimescaleDB 2.22.0, minor releases will now roll out in phases. Services tagged `#dev` will get upgraded first, followed by `#prod` after 21 days. This gives you time to validate upgrades in `#dev` before they reach `#prod` services. [Subscribe](https://status.timescale.com/) to get an email notification before your `#prod` service is upgraded. See [Maintenance and upgrades](https://docs.tigerdata.com/use-timescale/latest/upgrades/) for details.
+
+### ⏰ pg_cron extension
+
+`pg_cron` is now available on Tiger Cloud! With `pg_cron`, you can:
+- Schedule SQL commands to run automatically—like generating weekly sales reports or cleaning up old log entries every night at 2 AM.
+- Automate routine maintenance tasks such as refreshing materialized views hourly to keep dashboards current.
+- Eliminate external cron jobs and task schedulers, keeping all your automation logic within PostgreSQL.
+
+To enable `pg_cron` on your service, contact our support team. We're working on making this self-service in future updates.
+
+### ⚡️ Larger compute options: 48 and 64 CPU
+
+For the most demanding workloads, you can now create services with 48 and 64 CPUs. These options are only available on our Enterprise plan, and they're dedicated instances that are not shared with other customers.
+
+
+
+### 📋 Backup report for compliance
+
+Scale and Enterprise customers can now see a list of their backups in Tiger Cloud Console. For customers with SOC 2 or other compliance needs, this serves as auditable proof of backups.
+
+
+
+### 🗺️ New router for Tiger Cloud Console
+
+The UI just got snappier and easier to navigate with improved interlinking. For example, click an object in the `Jobs` page to see what hypertable the job is associated with.
+
+## New data import wizard
+
+
+We’ve introduced a cleaner, more intuitive UI for data import. It highlights the most common and recommended option, PostgreSQL Dump & Restore, while organizing all import options into clear categories to make navigation easier.
+
+The new categories include:
+- **PostgreSQL Dump & Restore**
+- **Upload Files**: CSV, Parquet, TXT
+- **Real-time Data Replication**: source connectors
+- **Migrations & Other Options**
+
+
+
+A new data import component has been added to the overview dashboard, providing a clear view of your imports. This includes quick start, in-progress status, and completed imports:
+
+
+
+## 🚁 Enhancements to the Postgres source connector
+
+
+- **Easy table selection**: You can now sync the complete source schema in one go. Select multiple tables from the
+ drop-down menu and start the connector.
+- **Sync metadata**: Connectors now display the following detailed metadata:
+ - `Initial data copy`: The number of rows copied at any given point in time.
+ - `Change data capture`: The replication lag represented in time and data size.
+- **Improved UX design**: In-progress syncs with separate sections showing the tables and metadata for
+ `initial data copy` and `change data capture`, plus a dedicated tab where you can add more tables to the connector.
+
+
+
+## 🦋 Developer role GA and hypertable transformation in Console
+
+
+### Developer role (GA)
+
+The [Developer role in Tiger Cloud](https://docs.tigerdata.com/use-timescale/latest/security/members/) is now
+generally available. It’s a project‑scoped permission set that lets technical users build and
+operate services, create or modify resources, run queries, and use observability—without admin or billing access.
+This enforces least‑privilege by default, reducing risk and audit noise, while keeping governance with Admins/Owners and
+billing with Finance. This means faster delivery (fewer access escalations), protected sensitive settings,
+and clear boundaries, so the right users can ship changes safely, while compliance and cost control remain intact.
+
+### Transform a table to a hypertable from the Explorer
+
+In Console, you can now easily create hypertables from your regular Postgres tables directly from the Explorer.
+Clicking on any Postgres table shows an option to open up the hypertable action. Follow the simple steps to set up your
+partition key and transform the table to a hypertable.
+
+
+
+
+
+## Cross-region backups, Postgres options, and onboarding
+
+
+### Cross-region backups
+
+You can now store backups in a different region than your service, which improves resilience and helps meet enterprise compliance requirements. Cross‑region backups are available on our Enterprise plan for free at launch; usage‑based billing may be introduced later. For full details, please [see the docs](https://docs.tigerdata.com/use-timescale/latest/backup-restore/#enable-cross-region-backup).
+
+### Standard Postgres instructions for onboarding
+
+We have added basic instructions for the INSERT, UPDATE, and DELETE commands to the Tiger Cloud console. They are now shown as an option in the Import Data page.
+
+### Postgres-only service type
+
+In Tiger Cloud, you now have an option to choose Postgres-only in the service creation flow. Just click `Looking for plan PostgreSQL?` on the `Service Type` screen.
+
+## Viewer role GA, EXPLAIN plans, and chunk index sizes in Explorer
+
+
+### GA release of the viewer role in role-based access
+
+The viewer role is now **generally available** for all projects and
+organizations. It provides **read-only access** to services, metrics, and logs
+without modify permissions. Viewers **cannot** create, update, or delete
+resources, nor manage users or billing. It's ideal for auditors, analysts, and
+cross-functional collaborators who need visibility but not control.
+
+### EXPLAIN plans in Insights
+
+You can now find automatically generated EXPLAIN plans on queries that take
+longer than 10 seconds within Insights. EXPLAIN plans can be very useful to
+determine how you may be able to increase the performance of your queries.
+
+### Chunk index size in Explorer
+
+Find the index size of hypertable chunks in the Explorer.
+This information can be very valuable to determine if a hypertable's chunk size
+is properly configured.
+
+## TimescaleDB v2.21 and catalog objects in the Console Explorer
+
+
+### 🏎️ TimescaleDB v2.21—ingest millions of rows/second and faster columnstore UPSERTs and DELETEs
+
+TimescaleDB v2.21 was released on July 8 and is now available to all developers on Tiger Cloud.
+
+Highlighted features in TimescaleDB v2.21 include:
+- **High-scale ingestion performance (tech preview)**: introducing a new approach that compresses data directly into the columnstore during ingestion, demonstrating over 1.2M rows/second in tests with bursts over 50M rows/second. We are actively seeking design partners for this feature.
+- **Faster data updates (UPSERTs)**: columnstore UPSERTs are now 2.5x faster for heavily constrained tables, building on the 10x improvement from v2.20.
+- **Faster data deletion**: DELETE operations on non-segmentby columns are 42x faster, reducing I/O and bloat.
+- **Reduced bloat after recompression**: optimized recompression processes lead to less bloat and more efficient storage.
+- **Enhanced continuous aggregates**:
+ - Concurrent refresh policies enable multiple continuous aggregates to update concurrently.
+ - Batched refreshes are now enabled by default for more efficient processing.
+- **Complete chunk management**: full support for splitting columnstore chunks, complementing the existing merge capabilities.
+
+For a comprehensive list of changes, refer to the [TimescaleDB v2.21 release notes](https://github.com/timescale/timescaledb/releases/tag/2.21.0).
+
+### 🔬 Catalog objects available in the Console Explorer
+
+You can now view catalog objects in the Console Explorer. Check out the internal schemas for PostgreSQL and TimescaleDB to better understand the inner workings of your database. To turn on/off visibility, select your service in Tiger Cloud Console, then click `Explorer` and toggle `Show catalog objects`.
+
+
+
+## Iceberg Destination Connector (Tiger Lake)
+
+
+We have released a beta Iceberg destination connector that enables Scale and Enterprise users to integrate Tiger Cloud services with Amazon S3 tables. This enables you to connect Tiger Cloud to data lakes seamlessly. We are actively developing several improvements that will make the overall data lake integration process even smoother.
+
+To use this feature, select your service in Tiger Cloud Console, then navigate to `Connectors` and select the `Amazon S3 Tables` destination connector. Integrate the connector to your S3 table bucket by providing the ARN roles, then simply select the tables that you want to sync into S3 tables. See the [documentation](https://docs.tigerdata.com/use-timescale/latest/tigerlake/) for details.
+
+## 🔆Console just got better
+
+
+### ✏️ Editable jobs in Console
+
+You can now edit jobs directly in Console! We've added the handy pencil icon in the top right corner of any
+job view. Click a job, hit the edit button, then make your changes. This works for all jobs, even user-defined ones.
+Tiger Cloud jobs come with custom wizards to guide you through the right inputs. This means you can spot and fix
+issues without leaving the UI - a small change that makes a big difference!
+
+
+
+### 📊 Connection history
+
+Now you can see your historical connection counts right in the Connections tab! This helps spot those pesky connection
+management bugs before they impact your app. We're logging max connections every hour (sampled every 5 mins) and might
+adjust based on your feedback. Just another way we're making the Console more powerful for troubleshooting.
+
+
+
+### 🔐 New in Public Beta: Read-Only Access through RBAC
+
+We’ve just launched Read/Viewer-only access for Tiger Cloud projects into public beta!
+
+You can now invite users with view-only permissions — perfect for folks who need to see dashboards, metrics,
+and query results, without the ability to make changes.
+
+This has been one of our most requested RBAC features, and it's a big step forward in making Tiger Cloud more secure and
+collaborative.
+
+No write access. No config changes. Just visibility.
+
+In Console, Go to `Project Settings` > `Users & Roles` to try it out, and let us know what you think!
+
+## 👀 Super useful doc updates
+
+
+### Updates to instructions for livesync
+
+In the Console UI, we have clarified the step-by-step procedure for setting up your livesync from self-hosted installations by:
+- Adding definitions for some flags when running your Docker container.
+- Including more detailed examples of the output from the table synchronization list.
+
+### New optional argument for add_continuous_aggregate_policy API
+
+Added the new `refresh_newest_first` optional argument that controls the order of incremental refreshes.
+
+## 🚀 Multi-command queries in SQL editor, improved job page experience, multiple AWS Transit Gateways, and a new service creation flow
+
+
+### Run multiple statements in SQL editor
+Execute complex queries with multiple commands in a single run—perfect for data transformations, table setup, and batch operations.
+
+### Branch conversations in SQL assistant
+Start new discussion threads from any point in your SQL assistant chat to explore different approaches to your data questions more easily.
+
+### Smarter results table
+- Expand JSON data instantly: turn complex JSON objects into readable columns with one click—no more digging through nested data structures.
+- Filter with precision: use a new smart filter to pick exactly what you want from a dropdown of all available values.
+
+### Jobs page improvements
+Individual job pages now display their corresponding configuration for TimescaleDB job types—for example, columnstore, retention, CAgg refreshes, tiering, and others.
+
+### Multiple AWS Transit Gateways
+
+You can now connect multiple AWS Transit Gateways, even when those gateways use overlapping CIDRs. Ideal for teams with zero-trust policies, this lets you keep each network path isolated.
+
+How it works: when you create a new peering connection, Tiger Cloud reuses the existing Transit Gateway if you supply the same ID—otherwise it automatically creates a new, isolated Transit Gateway.
+
+### Updated service creation flow
+
+The new service creation flow makes the choice of service type clearer. You can now create distinct types with Postgres extensions for real-time analytics (TimescaleDB), AI (pgvectorscale, pgai), and RTA/AI hybrid applications.
+
+
+
+## ⚙️ Improved Terraform support and TimescaleDB v2.20.3
+
+
+### Terraform support for Exporters and AWS Transit Gateway
+
+The latest version of the Timescale Terraform provider (2.3.0) adds support for:
+- Creating and attaching observability exporters to your services.
+- Securing the connections to your Timescale Cloud services with AWS Transit Gateway.
+- Configuring CIDRs for VPC and AWS Transit Gateway connections.
+
+Check the [Timescale Terraform provider documentation](https://registry.terraform.io/providers/timescale/timescale/latest/docs) for more details.
+
+### TimescaleDB v2.20.3
+
+This patch release for TimescaleDB v2.20 includes several bug fixes and minor improvements.
+Notable bug fixes include:
+- Adjustments to SkipScan costing for queries that require a full scan of indexed data.
+- A fix for issues encountered during dump and restore operations when chunk skipping is enabled.
+- Resolution of a bug related to dropped "quals" (qualifications/conditions) in SkipScan.
+
+For a comprehensive list of changes, refer to the [TimescaleDB 2.20.3 release notes](https://github.com/timescale/timescaledb/releases/tag/2.20.3).
+
+## 🧘 Read replica sets, faster tables, new anthropic models, and VPC support in data mode
+
+
+### Horizontal read scaling with read replica sets
+
+[Read replica sets](https://docs.timescale.com/use-timescale/latest/ha-replicas/read-scaling/) are an improved version of read replicas. They let you scale reads horizontally by creating up to 10 replica nodes behind a single read endpoint. Just point your read queries to the endpoint and configure the number of replicas you need without changing your application logic. You can increase or decrease the number of replicas in the set dynamically, with no impact on the endpoint.
+
+Read replica sets are used to:
+
+- Scale reads for read-heavy workloads and dashboards.
+- Isolate internal analytics and reporting from customer-facing applications.
+- Provide high availability and fault tolerance for read traffic.
+
+All existing read replicas have been automatically upgraded to a replica set with one node—no action required. Billing remains the same.
+
+Read replica sets are available for all Scale and Enterprise customers.
+
+
+
+### Faster, smarter results tables in data mode
+
+We've completely rebuilt how query results are displayed in the data mode to give you a faster, more powerful way to work with your data. The new results table can handle millions of rows with smooth scrolling and instant responses when you sort, filter, or format your data. You'll find it today in notebooks and presentation pages, with more areas coming soon.
+
+- **Your settings stick around**: when you customize how your table looks—applying filters, sorting columns, or formatting data—those settings are automatically saved. Switch to another tab and come back, and everything stays exactly how you left it.
+- **Better ways to find what you need**: filter your results by any column value, with search terms highlighted so you can quickly spot what you're looking for. The search box is now available everywhere you work with data.
+- **Export exactly what you want**: download your entire table or just select the specific rows and columns you need. Both CSV and Excel formats are supported.
+- **See patterns in your data**: highlight cells based on their values to quickly spot trends, outliers, or important thresholds in your results.
+- **Smoother navigation**: click any row number to see the full details in an expanded view. Columns automatically resize to show your data clearly, and web links in your results are now clickable.
+
+As a result, working with large datasets is now faster and more intuitive. Whether you're exploring millions of rows or sharing results with your team, the new table keeps up with how you actually work with data.
+
+### Latest Anthropic models added to SQL assistant
+
+Data mode's [SQL assistant](https://docs.timescale.com/getting-started/latest/run-queries-from-console/#sql-assistant) now supports Anthropic's latest models:
+
+- Sonnet 4
+- Sonnet 4 (extended thinking)
+- Opus 4
+- Opus 4 (extended thinking)
+
+### VPC support for passwordless data mode connections
+
+We previously made it much easier to connect newly created services to Timescale’s [data mode](https://docs.timescale.com/getting-started/latest/run-queries-from-console/#data-mode). We have now expanded this functionality to services using a VPC.
+
+## 🕵🏻️ Enhanced service monitoring, TimescaleDB v2.20, and livesync for Postgres
+
+
+### Updated top-level navigation - Monitoring tab
+
+In Timescale Console, we have consolidated multiple top-level service information tabs into the single `Monitoring` tab.
+This tab houses information previously displayed in the `Recommendations`, `Jobs`, `Connections`, `Metrics`, `Logs`,
+and `Insights` tabs.
+
+
+
+### Monitor active connections
+
+In the `Connections` section under `Monitoring`, you can now see information like the query being run, the application
+name, and duration for all current connections to a service.
+
+
+
+The information in `Connections` enables you to debug misconfigured applications, or
+cancel problematic queries to free up other connections to your database.
+
+### TimescaleDB v2.20 - query performance and faster data updates
+
+All new services created on Timescale Cloud are created using
+[TimescaleDB v2.20](https://github.com/timescale/timescaledb/releases/tag/2.20.0). Existing services will be
+automatically upgraded during their maintenance window.
+
+Highlighted features in TimescaleDB v2.20 include:
+* Efficiently handle data updates and upserts, including backfills, which are now up to 10x faster.
+* Up to 6x faster point queries on high-cardinality columns using new bloom filters.
+* Up to 2500x faster DISTINCT operations with SkipScan, perfect for quickly getting a unique list or the latest reading
+ from any device, event, or transaction.
+* 8x more efficient Boolean column storage with vectorized processing, resulting in 30-45% faster queries.
+* Enhanced developer flexibility with continuous aggregates now supporting window and mutable functions, plus
+ customizable refresh orders.
+
+### Postgres 13 and 14 deprecated on Tiger Cloud
+
+[TimescaleDB version 2.20](https://github.com/timescale/timescaledb/releases/tag/2.20.0) is not compatible with Postgres 14 and below.
+TimescaleDB 2.19.3 is the last bug-fix release for Postgres 14. Future fixes are for
+Postgres 15+ only. To continue receiving critical fixes and security patches, and to take
+advantage of the latest TimescaleDB features, you must upgrade to Postgres 15 or newer.
+This deprecation affects all Tiger Cloud services currently running Postgres 13 or
+Postgres 14.
+
+The timeline for the Postgres 13 and 14 deprecation is as follows:
+
+- **Deprecation notice period begins**: starting in early June 2025, you will receive email communication.
+- **Customer self-service upgrade window**: June 2025 through September 14, 2025. We strongly encourage you to
+ [manually upgrade Postgres](https://docs.tigerdata.com/use-timescale/latest/upgrades/#manually-upgrade-postgresql-for-a-service)
+ during this period.
+- **Automatic upgrade deadline**: your service will be
+ [automatically upgraded](https://docs.timescale.com/use-timescale/latest/upgrades/#automatic-postgresql-upgrades-for-a-service)
+ from September 15, 2025.
+
+### Enhancements to livesync for Postgres
+
+You now can:
+* Edit a running livesync to add and drop tables from an existing configuration:
+  - For existing tables, Timescale Console stops the livesync while keeping the target table intact.
+  - Newly added tables sync their existing data and transition into the Change Data Capture (CDC) state.
+* Create multiple livesync instances for Postgres per service. This is an upgrade from our initial launch,
+  which limited users to one livesync per service, and enables you to sync data from multiple Postgres
+  source databases into a single Timescale Cloud service.
+* No more hassle looking up schema and table names for livesync configuration from the source. Starting
+  today, all schema and table names are available in a dropdown menu for seamless source table selection.
+
+## ➕ More storage types and IOPS
+
+
+### 🚀 Enhanced storage: scale to 64 TB and 32,000 IOPS
+
+We're excited to introduce enhanced storage, a new storage type in Timescale Cloud that significantly boosts both capacity and performance, designed for customers with mission-critical workloads.
+
+With enhanced storage, Timescale Cloud now supports:
+- Up to 64 TB of storage per Timescale Cloud service (4x increase from the previous limit)
+- Up to 32,000 IOPS, enabling high-throughput ingest and low-latency queries
+
+Powered by AWS io2 volumes, enhanced storage gives your workloads the headroom they need—whether you're building financial data pipelines, developing IoT platforms, or processing billions of rows of telemetry. No more worrying about storage ceilings or IOPS bottlenecks.
+
+Enable enhanced storage in Timescale Console under `Operations` → `Compute & Storage`. Enhanced storage is currently available on the Enterprise pricing plan only. [Learn more here](https://docs.timescale.com/use-timescale/latest/data-tiering/enabling-data-tiering/).
+
+
+
+## ↔️ New export and import options
+
+
+### 🔥 Ship TimescaleDB metrics to Prometheus
+
+We’re excited to release the Prometheus Exporter for Timescale Cloud, making it easy to ship TimescaleDB metrics to your Prometheus instance.
+With the Prometheus Exporter, you can:
+
+- Export TimescaleDB metrics like CPU, memory, and storage
+- Visualize usage trends with your own Grafana dashboards
+- Set alerts for high CPU load, low memory, or storage nearing capacity
+
+To get started, create a Prometheus Exporter in the Timescale Console, attach it to your service, and configure Prometheus to scrape from the exposed URL. Metrics are secured with basic auth.
+Available on Scale and Enterprise plans. [Learn more here](https://docs.timescale.com/use-timescale/latest/metrics-logging/metrics-to-prometheus/).
+
+
+
+### 📥 Import text files into Postgres tables
+Our import options in Timescale Console have expanded to include local text files. You can add the content of multiple text files (one file per row) to a Postgres table, for use with vectorizers when creating embeddings for evaluation and development. You can find this new option in `Service` > `Actions` > `Import Data`.
+
+## 🤖 Automatic document embeddings from S3 and a sample dataset for AI testing
+
+
+### Automatic document embeddings from S3
+
+pgai vectorizer now supports automatic document vectorization. This makes it dramatically easier to build RAG and semantic search applications on top of unstructured data stored in Amazon S3. With just a SQL command, developers can create, update, and synchronize vector embeddings from a wide range of document formats—including PDFs, DOCX, XLSX, HTML, and more—without building or maintaining complex ETL pipelines.
+
+Instead of juggling multiple systems and syncing metadata, vectorizer handles the entire process: downloading documents from S3, parsing them, chunking text, and generating vector embeddings stored right in Postgres using pgvector. As documents change, embeddings stay up-to-date automatically—keeping your Postgres database the single source of truth for both structured and semantic data.
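+As a sketch, creating such a vectorizer might look like the following. The table, bucket path, and configuration helpers (`ai.loading_uri`, `ai.parsing_auto`, and friends) are illustrative; check the pgai documentation for the exact signatures in your version:
+
+```sql
+-- Hypothetical table of document references stored in S3
+CREATE TABLE documents (
+    id  SERIAL PRIMARY KEY,
+    uri TEXT NOT NULL  -- for example, 's3://my-bucket/reports/q1.pdf'
+);
+
+-- One SQL call sets up download, parsing, chunking, and embedding
+SELECT ai.create_vectorizer(
+    'documents'::regclass,
+    loading   => ai.loading_uri(column_name => 'uri'),
+    parsing   => ai.parsing_auto(),
+    chunking  => ai.chunking_recursive_character_text_splitter(),
+    embedding => ai.embedding_openai('text-embedding-3-small', 768)
+);
+```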
+
+
+
+### Sample dataset for AI testing
+
+You can now import a dataset directly from Hugging Face using Timescale Console. This dataset is ideal for testing vectorizers; you can find it on the `Import Data` page under the `Service` > `Actions` tab.
+
+
+
+## 🔁 Livesync for S3 and passwordless connections for data mode
+
+
+### Livesync for S3 (beta)
+
+[Livesync for S3](https://docs.timescale.com/migrate/latest/livesync-for-s3/) is our second livesync offering in
+Timescale Console, following livesync for Postgres. This feature helps users sync data in their S3 buckets to a
+Timescale Cloud service, and simplifies data importing. Livesync handles both existing and new data in real time,
+automatically syncing everything into a Timescale Cloud service. Users can integrate Timescale Cloud alongside S3, where
+S3 stores data in raw form as the source for multiple destinations.
+
+
+
+With livesync, users can connect Timescale Cloud with S3 in minutes, rather than spending days setting up and maintaining
+an ingestion layer.
+
+
+
+### UX improvements to livesync for Postgres
+
+In [livesync for Postgres](https://docs.timescale.com/migrate/latest/livesync-for-postgresql/), getting started
+requires setting `wal_level` to `logical` and granting specific permissions to start a publication
+on the source database. To simplify this setup process, we have added a detailed two-step checklist with comprehensive
+configuration instructions to Timescale Console.
+
+
+
+### Passwordless data mode connections
+
+We’ve made connecting to your Timescale Cloud services from [data mode](https://docs.timescale.com/getting-started/latest/run-queries-from-console/#connect-to-your-timescale-cloud-service-in-the-data-mode)
+in Timescale Console even easier! All new services created in Timescale Cloud are now automatically accessible from
+data mode without requiring you to enter your service credentials. Just open data mode, select your service, and
+start querying.
+
+
+
+We will be expanding this functionality to existing services in the coming weeks (including services using VPC peering),
+so stay tuned.
+
+## ☑️ Embeddings spot checks, TimescaleDB v2.19.3, and new models in SQL Assistant
+
+
+### Embeddings spot checks
+
+In Timescale Cloud, you can now quickly check the quality of the embeddings from the vectorizers' outputs. Construct a similarity search query with additional filters on source metadata using a simple UI. Run the query right away, or copy it to the SQL editor or data mode and further customize it to your needs. Run the check in Timescale Console > `Services` > `AI`:
+
+
+
+### TimescaleDB v2.19.3
+
+New services created in Timescale Cloud now use TimescaleDB v2.19.3. Existing services are in the process of being automatically upgraded to this version.
+
+This release adds a number of bug fixes including:
+
+- Fix a segfault when running queries against columnstore chunks that group by multiple columns, including UUID segmentby columns.
+- Fix a hypercore table access method segfault on DELETE operations that use a segmentby column.
+
+### New OpenAI, Llama, and Gemini models in SQL Assistant
+
+The data mode's SQL Assistant now includes support for the latest models from OpenAI and Llama: GPT-4.1 (including mini and nano) and Llama 4 (Scout and Maverick). Additionally, we've added support for Gemini models, in particular Gemini 2.0 Nano and 2.5 Pro (experimental and preview). With the new additions, SQL Assistant supports more than 20 language models so you can select the one best suited to your needs.
+
+
+
+## 🪵 TimescaleDB v2.19, new service overview page, and log improvements
+
+
+### TimescaleDB v2.19—query performance and concurrency improvements
+
+Starting this week, all new services created on Timescale Cloud use [TimescaleDB v2.19](https://github.com/timescale/timescaledb/releases/tag/2.19.0). Existing services will be upgraded gradually during their maintenance window.
+
+Highlighted features in TimescaleDB v2.19 include:
+
+- Improved concurrency of `INSERT`, `UPDATE`, and `DELETE` operations on the columnstore by no longer blocking DML statements during the recompression of a chunk.
+- Improved system performance during continuous aggregate refreshes by breaking them into smaller batches. This reduces system pressure and minimizes the risk of spilling to disk.
+- Faster and more up-to-date results for queries against continuous aggregates by materializing the most recent data first, as opposed to old data first in prior versions.
+- Faster analytical queries with SIMD vectorization of aggregations over text columns and `GROUP BY` over multiple columns.
+- Chunk size optimization for better query performance in the columnstore: merge smaller chunks into larger ones with the new `merge_chunks` procedure.
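+For the chunk-merging item, a rough sketch (the chunk names below are hypothetical, and the exact `merge_chunks` signature may vary by version, so check the TimescaleDB docs):
+
+```sql
+-- List the chunks of a hypothetical hypertable named metrics
+SELECT show_chunks('metrics');
+
+-- Merge two adjacent chunks into one larger chunk
+CALL merge_chunks(
+    '_timescaledb_internal._hyper_1_1_chunk',
+    '_timescaledb_internal._hyper_1_2_chunk'
+);
+```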
+
+### New service overview page
+
+The service overview page in Timescale Console has been overhauled to make it simpler and easier to use. Navigate to the `Overview` tab for any of your services and you will find an architecture diagram and general information pertaining to it. You may also see recommendations at the top, for how to optimize your service.
+
+
+
+To leave the product team your feedback, open `Help & Support` on the left and select `Send feedback to the product team`.
+
+### Log improvements
+
+Finding logs just got easier! We've added a date, time, and timezone picker, so you can jump straight to the exact moment you're interested in—no more endless scrolling.
+
+
+
+## 📒Faster vector search and improved job information
+
+
+### pgvectorscale 0.7.0: faster filtered vector search with filtered indexes
+
+This pgvectorscale release adds label-based filtered vector search to the StreamingDiskANN index.
+This enables you to return more precise and efficient results by combining vector
+similarity search with label filtering while still utilizing the ANN index. This is a common need for large-scale RAG and agentic applications
+that rely on vector searches with metadata filters to return relevant results. Filtered indexes add
+even more capabilities for filtered search at scale, complementing the high accuracy streaming filtering already
+present in pgvectorscale. The implementation is inspired by Microsoft's Filtered DiskANN research.
+For more information, see the [pgvectorscale release notes][log-28032025-pgvectorscale-rn] and a
+[usage example][log-28032025-pgvectorscale-example].
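+A minimal sketch of label-based filtering, assuming a hypothetical `docs` table with a `smallint[]` label column (see the release notes linked above for the exact setup):
+
+```sql
+-- Hypothetical documents table with labels for filtered search
+CREATE TABLE docs (
+    id        BIGSERIAL PRIMARY KEY,
+    embedding VECTOR(768),
+    labels    SMALLINT[]
+);
+
+-- Include the label column in the StreamingDiskANN index
+CREATE INDEX ON docs USING diskann (embedding vector_cosine_ops, labels);
+
+-- Filtered similarity search: overlap (&&) with the requested labels
+SELECT id
+FROM docs
+WHERE labels && ARRAY[1, 3]::smallint[]
+ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector  -- query vector placeholder
+LIMIT 10;
+```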
+
+### Job errors and individual job pages
+
+Each job now has an individual page in Timescale Console that displays additional details about job errors. You can use
+this information to debug failing jobs.
+
+To see the job information page, in [Timescale Console][console], select the service to check, then click `Jobs` > job ID to investigate.
+
+
+
+- Unsuccessful jobs with errors:
+
+
+
+## 🤩 In-Console Livesync for Postgres
+
+
+You can now set up an active data ingestion pipeline with livesync for Postgres in Timescale Console. This tool enables you to replicate your source database tables into Timescale's hypertables indefinitely. Yes, you heard that right—keep livesync running for as long as you need, ensuring that your existing source Postgres tables stay in sync with Timescale Cloud. Read more about setting up and using [Livesync for Postgres](https://docs.timescale.com/migrate/latest/livesync-for-postgresql/).
+
+
+
+
+
+
+
+
+
+## 💾 16K dimensions on pgvectorscale plus new pgai Vectorizer support
+
+
+### pgvectorscale 0.6 — store up to 16K dimension embeddings
+
+pgvectorscale 0.6.0 now supports storing vectors with up to 16,000 dimensions, removing the previous limitation of 2,000 from pgvector. This lets you use larger embedding models like OpenAI's text-embedding-3-large (3072 dim) with Postgres as your vector database. This release also includes key performance and capability enhancements, including NEON support for SIMD distance calculations on aarch64 processors, improved inner product distance metric implementation, and improved index statistics. See the release details [here](https://github.com/timescale/pgvectorscale/releases/tag/0.6.0).
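+A minimal sketch of what this enables, using a hypothetical `items` table (whether a given index type supports your dimension count depends on your pgvectorscale version):
+
+```sql
+-- Store 3072-dimensional OpenAI text-embedding-3-large vectors
+CREATE TABLE items (
+    id        BIGSERIAL PRIMARY KEY,
+    embedding VECTOR(3072)
+);
+
+-- StreamingDiskANN index from pgvectorscale on the high-dimensional column
+CREATE INDEX ON items USING diskann (embedding vector_cosine_ops);
+```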
+
+### pgai Vectorizer supports models from AWS Bedrock, Azure AI, Google Vertex via LiteLLM
+
+Access embedding models from popular cloud model hubs like AWS Bedrock, Azure AI Foundry, Google Vertex, as well as HuggingFace and Cohere as part of the LiteLLM integration with pgai Vectorizer. To use these models with pgai Vectorizer on Timescale Cloud, select `Other` when adding the API key in the credentials section of Timescale Console.
+
+## 🤖 Agent Mode for PopSQL and more
+
+
+### Agent Mode for PopSQL
+
+Introducing Agent Mode, a new feature in Timescale Console SQL Assistant. SQL Assistant lets you query your database using natural language. Previously, when you ran into errors, you had to approve each of the Assistant's suggested fixes before it was applied.
+
+With Agent Mode on, SQL Assistant automatically adjusts and executes your query without intervention. It runs, diagnoses, and fixes any errors that it runs into until you get your desired results.
+
+Below you can see SQL Assistant run into an error, identify the resolution, execute the fixed query, display results, and even change the title of the query:
+
+
+
+To use Agent Mode, make sure you have SQL Assistant enabled, then click on the model selector dropdown, and tick the `Agent Mode` checkbox.
+
+### Improved AWS Marketplace integration for a smoother experience
+
+We've enhanced the AWS Marketplace workflow to make your experience even better! Now, everything is fully automated,
+ensuring a seamless process from setup to billing. If you're using the AWS Marketplace integration, you'll notice a
+smoother transition and clearer billing visibility—your Timescale Cloud subscription will be reflected directly in AWS
+Marketplace!
+
+### Timescale Console recommendations
+
+Sometimes it can be hard to know if you are getting the best use out of your service. To help with this, Timescale
+Cloud now provides recommendations based on your service's context, helping with onboarding or notifying you of configuration concerns, such as consistently failing jobs.
+
+To start, recommendations are focused primarily on onboarding or service health, though we will regularly add new ones. You can see if you have any existing recommendations for your service by going to the `Actions` tab in Timescale Console.
+
+
+
+## 🛣️ Configuration Options for Secure Connections and More
+
+
+### Edit VPC and AWS Transit Gateway CIDRs
+
+You can now modify the CIDR blocks for your VPC or Transit Gateway directly from Timescale Console, giving you greater control over network access and security. This update makes it easier to adjust your private networking setup without needing to recreate your VPC or contact support.
+
+
+
+### Improved log filtering
+
+We’ve enhanced the `Logs` screen with the new `Warning` and `Log` filters to help you quickly find the logs you need. These additions complement the existing `Fatal`, `Error`, and `Detail` filters, making it easier to pinpoint specific events and troubleshoot issues efficiently.
+
+
+
+### TimescaleDB v2.18.2 on Timescale Cloud
+
+New services created in Timescale Cloud now use [TimescaleDB v2.18.2](https://github.com/timescale/timescaledb/releases/tag/2.18.2). Existing services are in the process of being automatically upgraded to this version.
+
+This new release fixes a number of bugs including:
+
+- Fix `ExplainHook` breaking the call chain.
+- Respect `ExecutorStart` hooks of other extensions.
+- Block dropping internal compressed chunks with `drop_chunk()`.
+
+### SQL Assistant improvements
+
+- Support for Claude 3.7 Sonnet and extended thinking including reasoning tokens.
+- Ability to abort SQL Assistant requests while the response is streaming.
+
+## 🤖 SQL Assistant Improvements and Pgai Docs Reorganization
+
+
+### New models and improved UX for SQL Assistant
+
+We have added fireworks.ai and Groq as service providers, and several new LLM options for SQL Assistant:
+
+- OpenAI o1
+- DeepSeek R1
+- Llama 3.3 70B
+- Llama 3.1 405B
+- DeepSeek R1 Distill - Llama 3.3
+
+We've also improved the model picker by adding descriptions for each model:
+
+
+
+### Updated and reorganized docs for pgai
+
+We have improved the GitHub docs for pgai. Now relevant sections have been grouped into their own folders and we've created a comprehensive summary doc. Check it out [here](https://github.com/timescale/pgai/tree/main/docs).
+
+## 💘 TimescaleDB v2.18.1 and AWS Transit Gateway Support Generally Available
+
+
+### TimescaleDB v2.18.1
+New services created in Timescale Cloud now use [TimescaleDB v2.18.1](https://github.com/timescale/timescaledb/releases/tag/2.18.1). Existing services will be automatically upgraded in their next maintenance window starting next week.
+
+This new release includes a number of bug fixes and small improvements including:
+
+* Faster columnar scans when using the hypercore table access method
+* Ensure all constraints are always applied when deleting data on the columnstore
+* Pushdown all filters on scans for UPDATE/DELETE operations on the columnstore
+
+### AWS Transit Gateway support is now generally available!
+
+Timescale Cloud now fully supports [AWS Transit Gateway](https://docs.timescale.com/use-timescale/latest/security/transit-gateway/), making it even easier to securely connect your database to multiple VPCs across different environments—including AWS, on-prem, and other cloud providers.
+
+With this update, you can establish a peering connection between your Timescale Cloud services and an AWS Transit Gateway in your AWS account. This keeps your Timescale Cloud services safely behind a VPC while allowing seamless access across complex network setups.
+
+## 🤖 TimescaleDB v2.18 and SQL Assistant Improvements in Data Mode and PopSQL
+
+
+
+### TimescaleDB v2.18 - dense indexes in the columnstore and query vectorization improvements
+Starting this week, all new services created on Timescale Cloud use [TimescaleDB v2.18](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Existing services will be upgraded gradually during their maintenance window.
+
+Highlighted features in TimescaleDB v2.18.0 include:
+
+* The ability to add dense indexes (btree and hash) to the columnstore through the new hypercore table access method.
+* Significant performance improvements through vectorization (SIMD) for aggregations using a group by with one column and/or using a filter clause when querying the columnstore.
+* Hypertables support triggers for transition tables, which is one of the most upvoted community feature requests.
+* Updated methods to manage Timescale's hybrid row-columnar store (hypercore). These methods highlight columnstore usage. The columnstore includes an optimized columnar format as well as compression.
+
+### SQL Assistant improvements
+
+We made a few improvements to SQL Assistant:
+
+**Dedicated SQL Assistant threads** 🧵
+
+Each query, notebook, and dashboard now gets its own conversation thread, keeping your chats organized.
+
+
+
+**Delete messages** ❌
+
+Made a typo? Asked the wrong question? You can now delete individual messages from your thread to keep the conversation clean and relevant.
+
+
+
+**Support for OpenAI `o3-mini` ⚡**
+
+We’ve added support for OpenAI’s latest `o3-mini` model, bringing faster response times and improved reasoning for SQL queries.
+
+
+
+## 🌐 IP Allowlists in Data Mode and PopSQL
+
+
+
+For enhanced network security, you can now also create IP allowlists in the Timescale Console data mode and PopSQL. Similarly to the [ops mode IP allowlists][ops-mode-allow-list], this feature grants access to your data only to certain IP addresses. For example, you might require your employees to use a VPN and add your VPN static egress IP to the allowlist.
+
+This feature is available in:
+
+- [Timescale Console][console] data mode, for all pricing tiers
+- [PopSQL web][popsql-web]
+- [PopSQL desktop][popsql-desktop]
+
+Enable this feature in PopSQL/Timescale Console data mode > `Project` > `Settings` > `IP Allowlist`:
+
+
+
+## 🤖 pgai Extension and Python Library Updates
+
+
+### AI — pgai Postgres extension 0.7.0
+This release enhances the Vectorizer functionality by adding configurable `base_url` support for OpenAI API. This enables pgai Vectorizer to use all OpenAI-compatible models and APIs via the OpenAI integration simply by changing the `base_url`. This release also includes public granting of vectorizers, superuser creation on any table, an upgrade to the Ollama client to 0.4.5, a new `docker-start` command, and various fixes for struct handling, schema qualification, and system package management. [See all changes on Github](https://github.com/timescale/pgai/releases/tag/extension-0.7.0).
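+As a rough sketch, pointing the vectorizer's OpenAI integration at an OpenAI-compatible endpoint might look like this. The table, model, chunking column, and endpoint URL are all hypothetical, and parameter names should be verified against the pgai docs for your version:
+
+```sql
+SELECT ai.create_vectorizer(
+    'articles'::regclass,
+    embedding => ai.embedding_openai(
+        'nomic-embed-text',                      -- any OpenAI-compatible model
+        768,
+        base_url => 'http://localhost:11434/v1'  -- e.g. a local Ollama server
+    ),
+    chunking => ai.chunking_recursive_character_text_splitter('body')
+);
+```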
+
+### AI - pgai python library 0.5.0
+This release adds comprehensive SQLAlchemy and Alembic support for vector embeddings, including operations for migrations and improved model inheritance patterns. You can now seamlessly integrate vector search capabilities with SQLAlchemy models while utilizing Alembic for database migrations. This release also adds key improvements to the Ollama integration and self-hosted Vectorizer configuration. [See all changes on Github](https://github.com/timescale/pgai/releases/tag/pgai-v0.5.0).
+
+## AWS Transit Gateway Support
+
+
+### AWS Transit Gateway Support (Early Access)
+Timescale Cloud now enables you to connect to your Timescale Cloud services through AWS Transit Gateway. This feature is available to Scale and Enterprise customers. It will be in Early Access for a short time and available in Timescale Console very soon. If you are interested in implementing this Early Access feature, reach out to your representative.
+
+## 🇮🇳 New region in India, Postgres 17 upgrades, and TimescaleDB on AWS Marketplace
+
+
+### Welcome India! (Support for a new region: Mumbai)
+Timescale Cloud now supports the Mumbai region. Starting today, you can run Timescale Cloud services in Mumbai, bringing our database solutions closer to users in India.
+
+### Postgres major version upgrades to PG 17
+Timescale Cloud services can now be upgraded directly to Postgres 17 from versions 14, 15, or 16. Users running versions 12 or 13 must first upgrade to version 15 or 16, before upgrading to 17.
+
+### Timescale Cloud available on AWS Marketplace
+Timescale Cloud is now available in the [AWS Marketplace][aws-timescale]. This allows you to keep billing centralized on your AWS account, use your already committed AWS Enterprise Discount Program spend to pay your Timescale Cloud bill, and simplify procurement and vendor management.
+
+## 🎅 Postgres 17, feature requests, and Postgres Livesync
+
+
+### Postgres 17
+All new Timescale Cloud services now come with Postgres 17.2, the latest version. Upgrades to Postgres 17 for services running on prior versions will be available in January.
+Postgres 17 adds new capabilities and improvements to Timescale like:
+* **System-wide Performance Improvements**. Significant performance boosts, particularly in high-concurrency workloads. Enhancements in the I/O layer, including improved Write-Ahead Log (WAL) processing, can result in up to a 2x increase in write throughput under heavy loads.
+* **Enhanced JSON Support**. The new JSON_TABLE allows developers to convert JSON data directly into relational tables, simplifying the integration of JSON and SQL. The release also adds new SQL/JSON constructors and query functions, offering powerful tools to manipulate and query JSON data within a traditional relational schema.
+* **More Flexible MERGE Operations**. The MERGE command now includes a RETURNING clause, making it easier to track and work with modified data. You can now also update views using MERGE, unlocking new use cases for complex queries and data manipulation.
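+To illustrate the JSON and MERGE additions (the `readings` and `staged` tables below are hypothetical examples):
+
+```sql
+-- JSON_TABLE: project JSON directly into a relational rowset
+SELECT t.*
+FROM JSON_TABLE(
+    '[{"city": "Lisbon", "temp": 21}, {"city": "Oslo", "temp": 4}]'::jsonb,
+    '$[*]' COLUMNS (
+        city TEXT    PATH '$.city',
+        temp NUMERIC PATH '$.temp'
+    )
+) AS t;
+
+-- MERGE ... RETURNING: see which action was taken for each row
+MERGE INTO readings AS r
+USING staged AS s ON r.id = s.id
+WHEN MATCHED THEN UPDATE SET temp = s.temp
+WHEN NOT MATCHED THEN INSERT (id, temp) VALUES (s.id, s.temp)
+RETURNING merge_action(), r.*;
+```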
+
+### Submit feature requests from Timescale Console
+You can now submit feature requests directly from Console and see the list of feature requests you have made. Just click on `Feature Requests` on the right sidebar.
+All feature requests are automatically published to the [Timescale Forum](https://www.timescale.com/forum/c/cloud-feature-requests/39) and are reviewed by the product team, providing more visibility and transparency on their status as well as allowing other customers to vote for them.
+
+
+
+### Postgres Livesync (Alpha release)
+We have built a new solution that helps you continuously replicate all or some of your Postgres tables directly into Timescale Cloud.
+
+[Livesync](https://docs.timescale.com/migrate/latest/livesync-for-postgresql/) allows you to keep a current Postgres instance such as RDS as your primary database, and easily offload your real-time analytical queries to Timescale Cloud to boost their performance. If you have any questions or feedback, talk to us in [#livesync in Timescale Community](https://app.slack.com/client/T4GT3N2JK/C086NU9EZ88).
+
+This is just the beginning—you'll see more from livesync in 2025!
+
+## In-Console import from S3, I/O Boost, and Jobs Explorer
+
+
+### In-Console import from S3 (CSV and Parquet files)
+
+Connect your S3 buckets to import data into Timescale Cloud. We support CSV (including `.zip` and `.gzip`) and Parquet files, with a 10 GB size limit in this initial release. This feature is accessible in the `Import your data` section right after service creation and through the `Actions` tab.
+
+
+
+
+
+### Self-Serve I/O Boost 📈
+
+I/O Boost is an add-on for customers on Scale or Enterprise tiers that maximizes the I/O capacity of EBS storage to 16,000 IOPS and 1,000 MBps throughput per service. To enable I/O Boost, navigate to `Services` > `Operations` in Timescale Console. A simple toggle allows you to enable the feature, with pricing clearly displayed at $0.41/hour per node.
+
+
+
+### Jobs Explorer
+
+See all the jobs associated with your service through a new `Jobs` tab. You can see the type of job, its status (`Running`, `Paused`, and others), and a detailed history of the last 100 runs, including success rates and runtime statistics.
+
+
+
+
+
+## 🛝 New service creation flow
+
+
+- **AI and Vector:** the UI now lets you choose an option for creating AI and Vector-ready services right from the start. You no longer need to add the pgai, pgvector, and pgvectorscale extensions manually. You can combine this with time-series capabilities as well!
+
+
+
+- **Compute size recommendations:** new (and old) users were sometimes unsure about what compute size to use for their workload. We now offer compute size recommendations based on how much data you plan to have in your service.
+
+
+
+- **More information about configuration options:** we've made it clearer what each configuration option does, so that you can make more informed choices about how you want your service to be set up.
+
+## 🗝️ IP Allow Lists!
+
+
+IP Allow Lists let you specify a list of IP addresses that have access to your Timescale Cloud services and block any others. IP Allow Lists are a
+lightweight but effective solution for customers concerned with security and compliance. They enable
+you to prevent unauthorized connections without the need for a [Virtual Private Cloud (VPC)](https://docs.timescale.com/use-timescale/latest/security/vpc/).
+
+To get started, in [Timescale Console](https://console.cloud.timescale.com/), select a service, then click
+**Operations** > **Security** > **IP Allow List**, then create an IP Allow List.
+
+
+
+For more information, [see our docs](https://docs.timescale.com/use-timescale/latest/security/ip-allow-list/).
+
+## 🤩 SQL Assistant, TimescaleDB v2.17, HIPAA compliance, and better logging
+
+
+### 🤖 New AI companion: SQL Assistant
+
+SQL Assistant uses AI to help you write SQL faster and more accurately.
+
+- **Real-time help:** chat with models like OpenAI 4o and Claude 3.5 Sonnet to get help writing SQL. Describe what you want in natural language and have AI write the SQL for you.
+
+
+
+
+
+- **Error resolution**: when you run into an error, SQL Assistant proposes a recommended fix that you can choose to accept.
+
+
+
+- **Generate titles and descriptions**: click a button and SQL Assistant generates a title and description for your query. No more untitled queries!
+
+
+
+See our [blog post](https://www.tigerdata.com/blog/postgres-gui-sql-assistant/) or [docs](https://docs.tigerdata.com/getting-started/latest/run-queries-from-console/#sql-assistant) for full details!
+
+### 🏄 TimescaleDB v2.17 - performance improvements for analytical queries and continuous aggregate refreshes
+
+Starting this week, all new services created on Timescale Cloud use [TimescaleDB v2.17](https://github.com/timescale/timescaledb/releases/tag/2.17.0). Existing services are upgraded gradually during their maintenance windows.
+
+TimescaleDB v2.17 significantly improves the performance of [continuous aggregate refreshes](https://docs.timescale.com/use-timescale/latest/continuous-aggregates/refresh-policies/), and contains performance improvements for [analytical queries and delete operations](https://docs.timescale.com/use-timescale/latest/compression/modify-compressed-data/) over compressed hypertables.
+
+Best practice is to upgrade at the next available opportunity.
+
+Highlighted features in TimescaleDB v2.17 are:
+
+* Significant performance improvements for continuous aggregate policies:
+
+  * Continuous aggregate refresh now uses `merge` instead of deleting old materialized data and re-inserting.
+
+  * Continuous aggregate policies are now more lightweight, use fewer system resources, and complete faster. This update:
+
+    * Dramatically decreases the amount of data that must be written on the continuous aggregate when there are only a small number of changes
+    * Reduces the I/O cost of refreshing a continuous aggregate
+    * Generates fewer Write-Ahead Logs (`WAL`)
+
+* Increased performance for real-time analytical queries over compressed hypertables:
+
+  * We are excited to introduce additional Single Instruction, Multiple Data (SIMD) vectorization optimizations to TimescaleDB. This release supports vectorized execution for queries that _group by_ the `segment_by` column(s) and _aggregate_ using the `sum`, `count`, `avg`, `min`, and `max` basic aggregate functions.
+
+  * Stay tuned for more to come in follow-up releases! Support for grouping on additional columns, filtered aggregation, vectorized expressions, and `time_bucket` is coming soon.
+
+* Improved performance of deletes on compressed hypertables when a large amount of data is affected. This improvement speeds up operations that delete whole segments by skipping the decompression step. It is enabled for all deletes that filter by the `segment_by` column(s).
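+As an illustration, here is the shape of query that benefits from the new vectorized execution, assuming a hypothetical compressed hypertable segmented by `device_id`:
+
+```sql
+-- Grouping by the segment_by column with basic aggregates can now be
+-- executed with SIMD vectorization on compressed hypertables
+SELECT device_id, avg(temperature), max(temperature)
+FROM conditions
+GROUP BY device_id;
+```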
+
+### HIPAA compliance
+
+Timescale Cloud's [Enterprise plan](https://docs.timescale.com/about/latest/pricing-and-account-management/#features-included-in-each-pricing-plan) is now HIPAA (Health Insurance Portability and Accountability Act) compliant. This allows organizations to securely manage and analyze sensitive healthcare data, ensuring they meet regulatory requirements while building compliant applications.
+
+### Expanded logging within Timescale Console
+
+Customers can now access more than just the most recent 500 logs within Timescale Console. We've also updated the user experience, including a scrollbar with infinite scrolling.
+
+
+
+## ✨ Connect to Timescale from .NET Stack and check status of recent jobs
+
+
+### Connect to Timescale with your .NET stack
+We've added instructions for connecting to Timescale using your .NET workflow. In Console after service creation, or in the **Actions** tab, you can now select .NET from the developer library list. The guide demonstrates how to use Npgsql to integrate Timescale with your existing software stack.
+
+
+
+### ✅ Last 5 jobs status
+In the **Jobs** section of the **Explorer**, users can now see the status (completed/failed) of the last 5 runs of each job.
+
+
+
+## 🎃 New AI, data integration, and performance enhancements
+
+
+### Pgai Vectorizer: vector embeddings as database indexes (early access)
+This early access feature enables you to automatically create, update, and maintain embeddings as your data changes. Just like an index, Timescale handles all the complexity: syncing, versioning, and cleanup happen automatically.
+This means no manual tracking, zero maintenance burden, and the freedom to rapidly experiment with different embedding models and chunking strategies without building new pipelines.
+Navigate to the AI tab in your service overview and follow the instructions to add your OpenAI API key and set up your first vectorizer or read our [guide to automate embedding generation with pgai Vectorizer](https://github.com/timescale/pgai/blob/main/docs/vectorizer/overview.md) for more details.
+
+
+
+### Postgres-to-Postgres foreign data wrappers
+Fetch and query data from multiple Postgres databases, including time-series data in hypertables, directly within Timescale Cloud using [foreign data wrappers (FDW)](https://docs.timescale.com/use-timescale/latest/schema-management/foreign-data-wrappers/). No more complicated ETL processes or external tools—just seamless integration right within your SQL editor. This feature is ideal for developers who manage multiple Postgres and time-series instances and need quick, easy access to data across databases.
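+A minimal `postgres_fdw` sketch (host, credentials, and schema names are placeholders):
+
+```sql
+CREATE EXTENSION IF NOT EXISTS postgres_fdw;
+
+-- Register the remote Postgres database
+CREATE SERVER other_db
+    FOREIGN DATA WRAPPER postgres_fdw
+    OPTIONS (host 'other.example.com', port '5432', dbname 'metrics');
+
+CREATE USER MAPPING FOR CURRENT_USER
+    SERVER other_db
+    OPTIONS (user 'readonly', password 'secret');
+
+-- Expose the remote tables locally and query them like any other table
+CREATE SCHEMA remote_public;
+IMPORT FOREIGN SCHEMA public FROM SERVER other_db INTO remote_public;
+```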
+
+### Faster queries over tiered data
+This release adds support for runtime chunk exclusion for queries that need to access [tiered storage](https://docs.timescale.com/use-timescale/latest/data-tiering/). Chunk exclusion now works with queries that use stable expressions in the `WHERE` clause. The most common form of this type of query is:
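+A sketch of that common form, using a hypothetical `metrics` hypertable: the `WHERE` clause filters on a stable expression (`now()` is stable, not immutable), which chunk exclusion can now evaluate at runtime:
+
+```sql
+SELECT *
+FROM metrics
+WHERE ts > now() - INTERVAL '7 days';
+```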
+
+For more info on queries with immutable/stable/volatile filters, check our blog post on [Implementing constraint exclusion for faster query performance](https://www.timescale.com/blog/implementing-constraint-exclusion-for-faster-query-performance/).
+
+If you no longer want to use tiered storage for a particular hypertable, you can now disable tiering and drop the associated tiering metadata on the hypertable with a call to [disable_tiering function](https://docs.timescale.com/use-timescale/latest/data-tiering/enabling-data-tiering/#disable-tiering).
+
+### Chunk interval recommendations
+Timescale Console now shows recommendations for services with too many small chunks in their hypertables.
+Recommendations for new intervals that improve service performance are displayed for each underperforming service and hypertable. Users can then change their chunk interval and boost performance within Timescale Console.
+
+
+
+## 💡 Help with hypertables and faster notebooks
+
+
+### 🧙Hypertable creation wizard
+After creating a service, users can now create a hypertable directly in Timescale Console by first creating a table, then converting it into a hypertable. This is possible using the in-console SQL editor. All standard hypertable configuration options are supported, along with any customization of the underlying table schema.
+
+
+### 🍭 PopSQL Notebooks
+The newest version of Data Mode Notebooks is now waaaay faster. Why? We've incorporated the newly developed v3 of our query engine that currently powers Timescale Console's SQL Editor. Check out the difference in query response times.
+
+## ✨ Production-Ready Low-Downtime Migrations, MySQL Import, Actions Tab, and Current Lock Contention Visibility in SQL Editor
+
+
+### 🏗️ Live Migrations v1.0 Release
+
+Last year, we began developing a solution for low-downtime migration from Postgres and TimescaleDB. Since then, this solution has evolved significantly, featuring enhanced functionality, improved reliability, and performance optimizations. We're now proud to announce that **live migration is production-ready** with the release of version 1.0.
+
+Many of our customers have successfully migrated databases to Timescale using [live migration](https://docs.timescale.com/migrate/latest/live-migration/), with some databases as large as a few terabytes in size.
+
+As part of the service creation flow, we offer the following:
+
+- Connect to services from different sources
+- Import and migrate data from various sources
+- Create hypertables
+
+Previously, these actions were only visible during the service creation process and couldn't be accessed later. Now, these actions are **persisted within the service**, allowing users to leverage them on-demand whenever they're ready to perform these tasks.
+
+
+
+### 🧭 Import Data from MySQL
+
+We've noticed users struggling to convert their MySQL schema and data into their Timescale Cloud services. This was due to the semantic differences between MySQL and Postgres. To simplify this process, we now offer **easy-to-follow instructions** to import data from MySQL to Timescale Cloud. This feature is available as part of the data import wizard, under the **Import from MySQL** option.
+
+
+
+### 🔐 Current Lock Contention
+
+In Timescale Console, we offer the SQL editor so you can query your service directly. As a new improvement, **if a query is waiting on locks and can't complete execution**, Timescale Console now displays the current lock contention in the results section.
+
+
+
+## CIDR & VPC Updates
+
+
+
+Timescale now supports multiple CIDRs on the customer VPC. Customers who want to take advantage of multiple CIDRs will need to recreate their peering.
+
+## 🤝 New modes in Timescale Console: Ops and Data mode, and Console based Parquet File Import
+
+
+
+We've been listening to your feedback and noticed that Timescale Console users have diverse needs. Some of you are focused on operational tasks like adding replicas or changing parameters, while others are diving deep into data analysis to gather insights.
+
+**To better serve you, we've introduced new modes to the Timescale Console UI—tailoring the experience based on what you're trying to accomplish.**
+
+Ops mode is where you can manage your services, add replicas, configure compression, change parameters, and so on.
+
+Data mode is the full PopSQL experience: write queries with autocomplete, visualize data with charts and dashboards, schedule queries and dashboards to create alerts or recurring reports, share queries and dashboards, and more.
+
+Try it today and let us know what you think!
+
+
+
+## Console based Parquet File Import
+
+Users can now import Parquet files into Timescale Cloud by uploading them from their local file system. For files larger than 250 MB, or if you prefer to run the import yourself, follow the three-step process to upload Parquet files to Timescale.
+
+
+
+### SQL editor improvements
+
+* In the Ops mode SQL editor, you can now highlight a statement to run just that statement.
+
+## High availability, usability, and migrations improvements
+
+
+### Multiple HA replicas
+
+Scale and Enterprise customers can now configure two new multiple high availability (HA) replica options directly through Timescale Console:
+
+* Two HA replicas (both asynchronous) - our highest availability configuration.
+* Two HA replicas (one asynchronous, one synchronous) - our highest data integrity configuration.
+
+Previously, Timescale offered only a single synchronous replica for customers seeking high availability. The single HA option is still available.
+
+
+
+
+
+For more details on multiple HA replicas, see [Manage high availability](https://docs.timescale.com/use-timescale/latest/ha-replicas/high-availability/).
+
+### Other improvements
+
+* In the Console SQL editor, we now indicate if your database session is healthy or has been disconnected. If it's been disconnected, the session will reconnect on your next query execution.
+
+
+
+* Released live-migration v0.0.26 and v0.0.27, which include multiple performance improvements and bug fixes, as well as better support for Postgres 12.
+
+## One-click SQL statement execution from Timescale Console, and session support in the SQL editor
+
+
+### One-click SQL statement execution from Timescale Console
+
+Now you can simply click to run SQL statements in various places in the Console. This requires that the [SQL Editor][sql-editor] is enabled for the service.
+
+* Enable Continuous Aggregates from the CAGGs wizard by clicking **Run** below the SQL statement.
+
+
+* Enable database extensions by clicking **Run** below the SQL statement.
+
+
+* Query data instantly with a single click in the Console after successfully uploading a CSV file.
+
+
+### Session support in the SQL editor
+
+Last week we announced the new in-console SQL editor. However, there was a limitation where a new database session was created for each query execution.
+
+Today we removed that limitation and added support for keeping one database session for each user logged in, which means you can do things like start transactions:
+
+Or work with temporary tables:
+
+Or use the `set` command:
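+
+Putting the three together (table and schema names are illustrative):
+
+```sql
+-- Transactions now span statements within one session:
+begin;
+insert into users (name, email) values ('john doe', 'john@example.com');
+abort; -- nothing inserted
+
+-- Temporary tables live for the duration of the session:
+create temporary table temp_users (email text);
+insert into temp_users (email) values ('john@example.com');
+
+-- Session-level settings persist between queries:
+set search_path to 'myschema', 'public';
+```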
+
+## 😎 Query your database directly from the Console and enhanced data import and migration options
+
+
+### SQL Editor in Timescale Console
+We've added a new tab to the service screen that allows users to query their database directly, without having to leave the console interface.
+
+* For existing services on Timescale, this is an opt-in feature. For all newly created services, the SQL Editor will be enabled by default.
+* Users can disable the SQL Editor at any time by toggling the option under the Operations tab.
+* The editor supports all DML and DDL operations (any single-statement SQL query), but doesn't support multiple SQL statements in a single query.
+
+
+
+### Enhanced Data Import Options for Quick Evaluation
+After service creation, we now offer a dedicated section for data import, including options to import from Postgres as a source or from CSV files.
+
+The enhanced Postgres import instructions now offer several options: single table import, schema-only import, partial data import (allowing selection of a specific time range), and complete database import. Users can execute any of these data imports with just one or two simple commands provided in the data import section.
+
+
+
+### Improvements to Live migration
+We've released v0.0.25 of Live migration that includes the following improvements:
+* Support for migrating TimescaleDB from a non-public schema to the public schema
+* Pre-migration compatibility checks
+* Docker compose build fixes
+
+## 🛠️ Improved tooling in Timescale Cloud and new AI and Vector extension releases
+
+
+### CSV import
+We have added a CSV import tool to the Timescale Console. For all TimescaleDB services, after service creation you can:
+* Choose a local file
+* Select the name of the data collection to be uploaded (default is file name)
+* Choose data types for each column
+* Upload the file as a new hypertable within your service
+
+Look for the `Import data from .csv` tile in the `Import your data` step of service creation.
+
+
+
+### Replica lag
+Customers now have more visibility into the state of replicas running on Timescale Cloud. We’ve released a new parameter called Replica Lag within the Service Overview for both Read and High Availability Replicas. Replica lag is measured in bytes against the current state of the primary database. For questions or concerns about the relative lag state of your replica, reach out to Customer Support.
+
+
+
+### Adjust chunk interval
+Customers can now adjust their chunk interval for their hypertables and continuous aggregates through the Timescale UI. In the Explorer, select the corresponding hypertable you would like to adjust the chunk interval for. Under *Chunk information*, you can change the chunk interval. Note that this only changes the chunk interval going forward, and does not retroactively change existing chunks.
+
+
+
+### CloudWatch permissions via role assumption
+We've released permission granting via role assumption to CloudWatch. Role assumption is both more secure and more convenient for customers who no longer need to rotate credentials and update their exporter config.
+
+For more details take a look at [our documentation][integrations].
+
+
+
+### Two-factor authentication (2FA) indicator
+We’ve added a 2FA status column to the Members page, allowing customers to easily see whether each project member has 2FA enabled or disabled.
+
+
+
+### Anthropic and Cohere integrations in pgai
+The pgai extension v0.3.0 now supports embedding creation and LLM reasoning using models from Anthropic and Cohere. For details and examples, see [this post for pgai and Cohere](https://www.timescale.com/blog/build-search-and-rag-systems-on-postgresql-using-cohere-and-pgai/), and [this post for pgai and Anthropic](https://www.timescale.com/blog/use-anthropic-claude-sonnet-3-5-in-postgresql-with-pgai/).
+
+### pgvectorscale extension: ARM builds and improved recall for low dimensional vectors
+pgvectorscale extension [v0.3.0](https://github.com/timescale/pgvectorscale/releases/tag/0.3.0) adds support for ARM processors and improves recall when using StreamingDiskANN indexes with low dimensionality vectors. We recommend updating to this version if you are self-hosting.
+
+## 🏄 Optimizations for compressed data and extended join support in continuous aggregates
+
+
+TimescaleDB v2.16.0 contains significant performance improvements when working with compressed data, extended join
+support in continuous aggregates, and the ability to define foreign keys from regular tables towards hypertables.
+We recommend upgrading at the next available opportunity.
+
+Any new service created on Timescale Cloud starting today uses TimescaleDB v2.16.0.
+
+In TimescaleDB v2.16.0 we:
+
+* Introduced multiple performance focused optimizations for data manipulation operations (DML) over compressed chunks.
+
+Improved upsert performance by more than 100x in some cases and more than 500x in some update/delete scenarios.
+
+* Added the ability to define chunk skipping indexes on non-partitioning columns of compressed hypertables.
+
+TimescaleDB v2.16.0 extends chunk exclusion to use these skipping (sparse) indexes when queries filter on the relevant columns,
+ and prune chunks that do not include any relevant data for calculating the query response.
+
+* Offered new options for use cases that require foreign keys to be defined.
+
+You can now add foreign keys from regular tables towards hypertables. We have also removed
+ some really annoying locks in the reverse direction that blocked access to referenced tables
+ while compression was running.
+
+* Extended Continuous Aggregates to support more types of analytical queries.
+
+More types of joins are supported, additional equality operators on join clauses, and
+ support for joins between multiple regular tables.
+
+**Highlighted features in this release**
+
+* Improved query performance through chunk exclusion on compressed hypertables.
+
+You can now define chunk skipping indexes on compressed chunks for any column with one of the following
+ integer or time-based data types: `smallint`, `int`, `bigint`, `serial`, `bigserial`, `date`, `timestamp`, `timestamptz`.
+
+After calling `enable_chunk_skipping` on a column, TimescaleDB tracks the min and max values for
+ that column, using this information to exclude chunks for queries filtering on that
+ column, where no data would be found.
+
+* Improved upsert performance on compressed hypertables.
+
+By using index scans to verify constraints during inserts on compressed chunks, TimescaleDB speeds
+ up some ON CONFLICT clauses by more than 100x.
+
+* Improved performance of updates, deletes, and inserts on compressed hypertables.
+
+By filtering data while accessing the compressed data and before decompressing, TimescaleDB has
+ improved performance for updates and deletes on all types of compressed chunks, as well as inserts
+ into compressed chunks with unique constraints.
+
+By signaling constraint violations without decompressing, or decompressing only when matching
+ records are found in the case of updates, deletes and upserts, TimescaleDB v2.16.0 speeds
+ up those operations more than 1000x in some update/delete scenarios, and 10x for upserts.
+
+* You can add foreign keys from regular tables to hypertables, with support for all types of cascading options.
+ This is useful for hypertables that partition using sequential IDs, and need to reference these IDs from other tables.
+
+* Lower locking requirements during compression for hypertables with foreign keys
+
+Advanced foreign key handling removes the need for locking referenced tables when new chunks are compressed.
+ DML is no longer blocked on referenced tables while compression runs on a hypertable.
+
+* Improved support for queries on Continuous Aggregates
+
+`INNER/LEFT` and `LATERAL` joins are now supported. Plus, you can now join with multiple regular tables,
+ and have more than one equality operator on join clauses.
+
+**Postgres 13 support removal announcement**
+
+Following the deprecation announcement for Postgres 13 in TimescaleDB v2.13,
+Postgres 13 is no longer supported in TimescaleDB v2.16.
+
+The currently supported Postgres major versions are 14, 15, and 16.
+
+## 📦 Performance, packaging and stability improvements for Timescale Cloud
+
+
+### New plans
+To support evolving customer needs, Timescale Cloud now offers three plans to provide more value, flexibility, and efficiency.
+- **Performance:** for cost-focused, smaller projects. No credit card required to start.
+- **Scale:** for developers handling critical and demanding apps.
+- **Enterprise:** for enterprises with mission-critical apps.
+
+Each plan continues to bill based on hourly usage, primarily for compute you run and storage you consume. You can upgrade or downgrade between Performance and Scale plans via the Console UI at any time. More information about the specifics and differences between these pricing plans can be found [here in the docs](https://docs.timescale.com/about/latest/pricing-and-account-management/).
+
+
+### Improvements to the Timescale Console
+The individual tiles on the services page have been enhanced with new information, including high-availability status. This will let you better assess the state of your services at a glance.
+
+
+### Live migration release v0.0.24
+Improvements:
+- Automatic retries are now available for the initial data copy of the migration
+- Initial data copy now uses pgcopydb for Postgres-to-TimescaleDB migrations as well (as it already did for TimescaleDB-to-TimescaleDB migrations), which brings a significant performance boost.
+- Fixes issues with TimescaleDB v2.13.x migrations
+- Support for chunk mapping for hypertables with custom schema and table prefixes
+
+## ⚡ Performance and stability improvements for Timescale Cloud and TimescaleDB
+
+
+The following improvements have been made to Timescale products:
+
+- **Timescale Cloud**:
+ - The connection pooler has been updated and now avoids multiple reloads
+ - The tsdbadmin user can now grant the following roles to other users: `pg_checkpoint`,`pg_monitor`,`pg_signal_backend`,`pg_read_all_stats`,`pg_stat_scan_tables`
+ - Timescale Console is far more reliable.
+
+- **TimescaleDB**
+ - The TimescaleDB v2.15.3 patch release improves handling of multiple unique indexes in a compressed INSERT,
+ removes the recheck of ORDER when querying compressed data, improves memory management in DML functions, improves
+ the tuple lock acquisition for tiered chunks on replicas, and fixes an issue with ORDER BY/GROUP BY in our
+ HashAggregate optimization on PG16. For more information, see the [release note](https://github.com/timescale/timescaledb/releases/tag/2.15.3).
+ - The TimescaleDB v2.15.2 patch release improves sort pushdown for partially compressed chunks, and compress_chunk with
+ a primary space partition. The metadata function is removed from the update script, and hash partitioning on a
+ primary column is disallowed. For more information, see the [release note](https://github.com/timescale/timescaledb/releases/tag/2.15.2).
+
+## ⚡ Performance improvements for live migration to Timescale Cloud
+
+
+The following improvements have been made to the Timescale [live-migration docker image](https://hub.docker.com/r/timescale/live-migration/tags):
+
+- Table-based filtering is now available during live migration.
+- Improvements to pgcopydb increase performance and remove unhelpful warning messages.
+- The user notification log enables you to always select the most recent release for a migration run.
+
+For improved stability and new features, update to the latest [timescale/live-migration](https://hub.docker.com/r/timescale/live-migration/tags) docker image. To learn more, see the [live migration docs](https://docs.timescale.com/migrate/latest/live-migration/).
+
+## 🦙Ollama integration in pgai
+
+
+
+Ollama is now integrated with [pgai](https://github.com/timescale/pgai).
+
+Ollama is the easiest and most popular way to get up and running with open-source
+language models. Think of Ollama as _Docker for LLMs_, enabling easy access and usage
+of a variety of open-source models like Llama 3, Mistral, Phi 3, Gemma, and more.
+
+With the pgai extension integrated in your database, embed Ollama AI into your app using
+SQL. For example:
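+
+A sketch of what this looks like; the exact function name and arguments are defined by the pgai extension, so treat this as illustrative and check the pgai Ollama documentation for the current API:
+
+```sql
+-- Illustrative only: ask a locally served Llama 3 model a question and
+-- extract the generated text from the returned JSON.
+SELECT ollama_generate(
+  'llama3',
+  'Name three common uses of time-series data.'
+) -> 'response';
+```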
+
+To learn more, see the [pgai Ollama documentation](https://github.com/timescale/pgai/blob/main/docs/vectorizer/quick-start.md).
+
+## 🧙 Compression Wizard
+
+
+
+The compression wizard is now available on Timescale Cloud. Select a hypertable and be guided through enabling compression through the UI!
+
+To access the compression wizard, navigate to `Explorer`, and select the hypertable you would like to compress. In the top right corner, hover where it says `Compression off`, and open the wizard. You will then be guided through the process of configuring compression for your hypertable, and can compress it directly through the UI.
+
+
+
+## 🏎️💨 High Performance AI Apps With pgvectorscale
+
+
+
+The [vectorscale extension][pgvectorscale] is now available on [Timescale Cloud][signup].
+
+pgvectorscale complements pgvector, the open-source vector data extension for Postgres, and introduces the
+following key innovations for pgvector data:
+
+- A new index type called StreamingDiskANN, inspired by the DiskANN algorithm, based on research from Microsoft.
+- Statistical Binary Quantization: developed by Timescale researchers, this compression method improves on
+ standard Binary Quantization.
+
+On a benchmark dataset of 50 million Cohere embeddings (768 dimensions each), Postgres with pgvector and
+pgvectorscale achieves 28x lower p95 latency and 16x higher query throughput compared to Pinecone's storage
+optimized (s1) index for approximate nearest neighbor queries at 99% recall, all at 75% less cost when
+self-hosted on AWS EC2.
+
+To learn more, see the [pgvectorscale documentation][pgvectorscale].
+
+## 🧐Integrate AI Into Your Database Using pgai
+
+
+
+The [pgai extension][pgai] is now available on [Timescale Cloud][signup].
+
+pgai brings embedding and generation AI models closer to the database. With pgai, you can now do the following directly
+from within Postgres in a SQL query:
+
+* Create embeddings for your data.
+* Retrieve LLM chat completions from models like OpenAI GPT4o.
+* Reason over your data and facilitate use cases like classification, summarization, and data enrichment on your existing relational data in Postgres.
+
+To learn more, see the [pgai documentation][pgai].
+
+## 🐅Continuous Aggregate and Hypertable Improvements for TimescaleDB
+
+
+The 2.15.x releases contain performance improvements and bug fixes. Highlights in these releases are:
+
+- Continuous aggregates now support `time_bucket` with origin and/or offset.
+- Hypertable compression has the following improvements:
+ - Recommend optimized defaults for segment by and order by when configuring compression through analysis of table configuration and statistics.
+ - Added planner support to check more kinds of WHERE conditions before decompression.
+ This reduces the number of rows that have to be decompressed.
+ - You can now use minmax sparse indexes when you compress columns with btree indexes.
+ - Vectorize filters in the WHERE clause that contain text equality operators and LIKE expressions.
+
+To learn more, see the [TimescaleDB release notes](https://github.com/timescale/timescaledb/releases).
+
+## 🔍 Database Audit Logging with pgaudit
+
+
+The [Postgres Audit extension (pgaudit)](https://github.com/pgaudit/pgaudit/) is now available on [Timescale Cloud][signup].
+pgaudit provides detailed database session and object audit logging in the Timescale
+Cloud logs.
+
+If you have strict security and compliance requirements and need to log all operations
+on the database level, pgaudit can help. You can also export these audit logs to
+[Amazon CloudWatch](https://aws.amazon.com/cloudwatch/).
+
+To learn more, see the [pgaudit documentation](https://github.com/pgaudit/pgaudit/).
+
+## 🌡 International System of Unit Support with postgresql-unit
+
+
+The [SI Units for Postgres extension (unit)](https://github.com/df7cb/postgresql-unit) provides support for the
+[International System of Units (SI)](https://en.wikipedia.org/wiki/International_System_of_Units) in [Timescale Cloud][signup].
+
+You can use Timescale Cloud to answer day-to-day questions. For example, to see what 50°C is in °F, run the following
+query in your Timescale Cloud service:
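+
+A sketch, assuming the `@` conversion operator provided by the unit extension:
+
+```sql
+SELECT '50°C'::unit @ '°F' AS temperature;
+```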
+
+To learn more, see the [postgresql-unit documentation](https://github.com/df7cb/postgresql-unit).
+
+===== PAGE: https://docs.tigerdata.com/about/timescaledb-editions/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+SELECT * FROM hypertable WHERE timestamp_col > now() - '100 days'::interval
+```
+
+Example 2 (sql):
+```sql
+begin;
+insert into users (name, email) values ('john doe', 'john@example.com');
+abort; -- nothing inserted
+```
+
+Example 3 (sql):
+```sql
+create temporary table temp_users (email text);
+insert into temp_users (email) values ('john@example.com');
+-- table will automatically disappear after your session ends
+```
+
+Example 4 (sql):
+```sql
+set search_path to 'myschema', 'public';
+```
+
+---
+
+## Create a compression policy
+
+**URL:** llms-txt#create-a-compression-policy
+
+**Contents:**
+- Enable a compression policy
+ - Enabling compression
+- View current compression policy
+- Pause compression policy
+- Remove compression policy
+- Disable compression
+
+Old API since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Replaced by Optimize your data for real-time analytics.
+
+You can enable compression on individual hypertables, by declaring which column
+you want to segment by.
+
+## Enable a compression policy
+
+This page uses an example table, called `example`, and segments it by the
+`device_id` column. Every chunk that is more than seven days old is then marked
+to be automatically compressed. The source data is organized like this:
+
+|time|device_id|cpu|disk_io|energy_consumption|
+|-|-|-|-|-|
+|8/22/2019 0:00|1|88.2|20|0.8|
+|8/22/2019 0:05|2|300.5|30|0.9|
+
+### Enabling compression
+
+1. At the `psql` prompt, alter the table:
+
+1. Add a compression policy to compress chunks that are older than seven days:
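+
+For the `example` table, the two steps look like this:
+
+```sql
+-- Step 1: turn on compression, segmenting by device_id:
+ALTER TABLE example SET (
+  timescaledb.compress,
+  timescaledb.compress_segmentby = 'device_id'
+);
+
+-- Step 2: compress chunks older than seven days:
+SELECT add_compression_policy('example', INTERVAL '7 days');
+```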
+
+For more information, see the API reference for
+[`ALTER TABLE (compression)`][alter-table-compression] and
+[`add_compression_policy`][add_compression_policy].
+
+## View current compression policy
+
+To view the compression policy that you've set:
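+
+For example:
+
+```sql
+SELECT * FROM timescaledb_information.jobs
+  WHERE proc_name = 'policy_compression';
+```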
+
+For more information, see the API reference for [`timescaledb_information.jobs`][timescaledb_information-jobs].
+
+## Pause compression policy
+
+To disable a compression policy temporarily, find the corresponding job ID and then call `alter_job` to pause it:
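+
+For example (1000 is an illustrative job ID; look yours up first):
+
+```sql
+-- Find the job ID of the compression policy:
+SELECT job_id FROM timescaledb_information.jobs
+  WHERE proc_name = 'policy_compression';
+
+-- Pause the policy job:
+SELECT alter_job(1000, scheduled => false);
+```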
+
+## Remove compression policy
+
+To remove a compression policy, use `remove_compression_policy`:
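+
+For the `example` table:
+
+```sql
+SELECT remove_compression_policy('example');
+```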
+
+For more information, see the API reference for
+[`remove_compression_policy`][remove_compression_policy].
+
+## Disable compression
+
+You can disable compression entirely on individual hypertables. This command
+works only if you don't currently have any compressed chunks:
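+
+For the `example` table:
+
+```sql
+ALTER TABLE example SET (timescaledb.compress = false);
+```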
+
+If your hypertable contains compressed chunks, you need to
+[decompress each chunk][decompress-chunks] individually before you can turn off
+compression.
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/compression/modify-compressed-data/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+ALTER TABLE example SET (
+ timescaledb.compress,
+ timescaledb.compress_segmentby = 'device_id'
+ );
+```
+
+Example 2 (sql):
+```sql
+SELECT add_compression_policy('example', INTERVAL '7 days');
+```
+
+Example 3 (sql):
+```sql
+SELECT * FROM timescaledb_information.jobs
+ WHERE proc_name='policy_compression';
+```
+
+Example 4 (sql):
+```sql
+SELECT * FROM timescaledb_information.jobs WHERE proc_name = 'policy_compression' AND relname = 'example';
+```
+
+---
+
+## Compress your data using hypercore
+
+**URL:** llms-txt#compress-your-data-using-hypercore
+
+**Contents:**
+- Optimize your data in the columnstore
+- Take advantage of query speedups
+
+Over time you end up with a lot of data. Since this data is mostly immutable, you can compress it
+to save space and avoid incurring additional cost.
+
+TimescaleDB is built for handling event-oriented data such as time-series and fast analytical queries, and it comes
+with support for [hypercore][hypercore], featuring the columnstore.
+
+[Hypercore][hypercore] enables you to store the data in a vastly more efficient format allowing
+up to 90x compression ratio compared to a normal Postgres table. However, this is highly dependent
+on the data and configuration.
+
+[Hypercore][hypercore] is implemented natively in Postgres and does not require special storage
+formats. When you convert your data from the rowstore to the columnstore, TimescaleDB uses
+Postgres features to transform the data into columnar format. The use of a columnar format allows a better
+compression ratio since similar data is stored adjacently. For more details on the columnar format,
+see [hypercore][hypercore].
+
+A beneficial side effect of compressing data is that certain queries are significantly faster, since
+less data has to be read into memory.
+
+## Optimize your data in the columnstore
+
+To compress the data in the `transactions` table, do the following:
+
+1. Connect to your Tiger Cloud service
+
+In [Tiger Cloud Console][services-portal] open an [SQL editor][in-console-editors]. The in-Console editors display the query speed.
+ You can also connect to your service using [psql][connect-using-psql].
+
+1. Convert data to the columnstore:
+
+You can do this either automatically or manually:
+ - [Automatically convert chunks][add_columnstore_policy] in the hypertable to the columnstore at a specific time interval:
+
+- [Manually convert all chunks][convert_to_columnstore] in the hypertable to the columnstore:
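+
+For the `transactions` table these two options look like this (the chunk name in the manual call is illustrative; real names come from `show_chunks`):
+
+```sql
+-- Automatically convert chunks older than one day:
+CALL add_columnstore_policy('transactions', after => INTERVAL '1d');
+
+-- Or convert a single chunk by hand:
+CALL convert_to_columnstore('_timescaledb_internal._hyper_1_1_chunk');
+```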
+
+## Take advantage of query speedups
+
+In the previous step, data in the columnstore was segmented by the `block_id` column value.
+This means fetching data by filtering or grouping on that column is
+more efficient. Ordering is set to time descending. This means that when you run queries
+which try to order data in the same way, you see performance benefits.
+
+1. Connect to your Tiger Cloud service
+
+In [Tiger Cloud Console][services-portal] open an [SQL editor][in-console-editors]. The in-Console editors display the query speed.
+
+1. Run the following query:
+
+The performance speedup is about two orders of magnitude: around 15 ms when the data is compressed in the
+ columnstore, versus around 1 second when decompressed in the rowstore.
+
+===== PAGE: https://docs.tigerdata.com/tutorials/blockchain-query/blockchain-dataset/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+CALL add_columnstore_policy('transactions', after => INTERVAL '1d');
+```
+
+Example 2 (sql):
+```sql
+DO $$
+ DECLARE
+ chunk_name TEXT;
+ BEGIN
+ FOR chunk_name IN (SELECT c FROM show_chunks('transactions') c)
+ LOOP
+ RAISE NOTICE 'Converting chunk: %', chunk_name; -- Optional: To see progress
+ CALL convert_to_columnstore(chunk_name);
+ END LOOP;
+ RAISE NOTICE 'Conversion to columnar storage complete for all chunks.'; -- Optional: Completion message
+ END$$;
+```
+
+Example 3 (sql):
+```sql
+WITH recent_blocks AS (
+ SELECT block_id FROM transactions
+ WHERE is_coinbase IS TRUE
+ ORDER BY time DESC
+ LIMIT 5
+ )
+ SELECT
+ t.block_id, count(*) AS transaction_count,
+ SUM(weight) AS block_weight,
+ SUM(output_total_usd) AS block_value_usd
+ FROM transactions t
+ INNER JOIN recent_blocks b ON b.block_id = t.block_id
+ WHERE is_coinbase IS NOT TRUE
+ GROUP BY t.block_id;
+```
+
+---
+
+## ALTER TABLE (Compression)
+
+**URL:** llms-txt#alter-table-(compression)
+
+**Contents:**
+- Samples
+- Required arguments
+- Optional arguments
+- Parameters
+
+Old API since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Replaced by ALTER TABLE (Hypercore).
+
+The `ALTER TABLE` statement is used to turn on compression and set compression
+options.
+
+By itself, this `ALTER` statement alone does not compress a hypertable. To do so, either create a
+compression policy using the [add_compression_policy][add_compression_policy] function or manually
+compress a specific hypertable chunk using the [compress_chunk][compress_chunk] function.
+
+Configure a hypertable that ingests device data to use compression. Here, if the hypertable
+is often queried about a specific device or set of devices, the compression should be
+segmented using the `device_id` for greater performance.
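+
+A minimal sketch (the hypertable name is illustrative):
+
+```sql
+ALTER TABLE metrics SET (
+  timescaledb.compress,
+  timescaledb.compress_segmentby = 'device_id'
+);
+```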
+
+You can also specify compressed chunk interval without changing other
+compression settings:
+
+To disable the previously set option, set the interval to 0:
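+
+Both operations are plain `ALTER TABLE` calls (the hypertable name and interval are illustrative):
+
+```sql
+-- Set the compressed chunk interval independently of other settings:
+ALTER TABLE metrics SET (
+  timescaledb.compress_chunk_time_interval = '24 hours'
+);
+
+-- Disable the option again:
+ALTER TABLE metrics SET (
+  timescaledb.compress_chunk_time_interval = '0'
+);
+```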
+
+## Required arguments
+
+|Name|Type|Description|
+|-|-|-|
+|`timescaledb.compress`|BOOLEAN|Enable or disable compression|
+
+## Optional arguments
+
+|Name|Type| Description |
+|-|-|-|
+|`timescaledb.compress_orderby`|TEXT| Order used by compression, specified in the same way as the ORDER BY clause in a SELECT query. The default is the descending order of the hypertable's time column. |
+|`timescaledb.compress_segmentby`|TEXT| Column list on which to key the compressed segments. An identifier representing the source of the data such as `device_id` or `tags_id` is usually a good candidate. The default is no `segment by` columns. |
+|`timescaledb.compress_chunk_time_interval`|TEXT| EXPERIMENTAL: Set compressed chunk time interval used to roll chunks into. This parameter compresses every chunk, and then irreversibly merges it into a previous adjacent chunk if possible, to reduce the total number of chunks in the hypertable. Note that chunks will not be split up during decompression. It should be set to a multiple of the current chunk interval. This option can be changed independently of other compression settings and does not require the `timescaledb.compress` argument. |
+
+|Name|Type|Description|
+|-|-|-|
+|`table_name`|TEXT|Hypertable that supports compression|
+|`column_name`|TEXT|Column used to order by or segment by|
+|`interval`|TEXT|Time interval used to roll compressed chunks into|
+
+===== PAGE: https://docs.tigerdata.com/api/compression/hypertable_compression_stats/ =====
+
+
+---
+
+## FAQ and troubleshooting
+
+**URL:** llms-txt#faq-and-troubleshooting
+
+**Contents:**
+- Unsupported in live migration
+- Where can I find logs for processes running during live migration?
+- Source and target databases have different TimescaleDB versions
+- Why does live migration log "no tuple identifier" warning?
+- Set REPLICA IDENTITY on Postgres partitioned tables
+- Can I use read/failover replicas as source database for live migration?
+- Can I use live migration with a Postgres connection pooler like PgBouncer?
+- Can I use Tiger Cloud instance as source for live migration?
+- How can I exclude a schema/table from being replicated in live migration?
+- Large migrations blocked
+
+## Unsupported in live migration
+
+Live migration tooling is currently experimental. You may run into the following shortcomings:
+
+- Live migration does not yet support mutable columnstore compression (`INSERT`, `UPDATE`,
+ `DELETE` on data in the columnstore).
+- By default, numeric fields containing `NaN`/`+Inf`/`-Inf` values are not
+ correctly replicated, and will be converted to `NULL`. A workaround is available, but is not enabled by default.
+
+Should you run into any problems, please open a support request before spending
+time debugging issues.
+You can open a support request directly from [Tiger Cloud Console][support-link],
+or by email to [support@tigerdata.com](mailto:support@tigerdata.com).
+
+## Where can I find logs for processes running during live migration?
+
+Live migration involves several background processes to manage different stages of
+the migration. The logs of these processes can be helpful for troubleshooting
+unexpected behavior. You can find these logs in the `/logs` directory.
+
+## Source and target databases have different TimescaleDB versions
+
+When you migrate a [self-hosted][self hosted] or [Managed Service for TimescaleDB (MST)][mst]
+database to Tiger Cloud, the source database and the destination
+[Tiger Cloud service][timescale-service] must run the same version of TimescaleDB.
+
+Before you start [live migration][live migration]:
+
+1. Check the version of TimescaleDB running on the source database and the
+   target Tiger Cloud service:
+
+1. If the version of TimescaleDB on the source database is lower than your Tiger Cloud service, either:
+
+   - **Downgrade**: reinstall an older version of TimescaleDB on your Tiger Cloud service that matches the source database:
+
+     1. Connect to your Tiger Cloud service and check the versions of TimescaleDB available:
+
+     1. If an available TimescaleDB release matches your source database:
+
+        1. Uninstall TimescaleDB from your Tiger Cloud service:
+
+        1. Reinstall the correct version of TimescaleDB:
+
+     You may need to reconnect to your Tiger Cloud service using `psql -X` when you're creating the TimescaleDB extension.
+
+   - **Upgrade**: for self-hosted databases, [upgrade TimescaleDB][self hosted upgrade] to match your Tiger Cloud service.
+
+## Why does live migration log "no tuple identifier" warning?
+
+Live migration logs a warning `WARNING: no tuple identifier for UPDATE in table`
+when it cannot determine which specific rows should be updated after receiving an
+`UPDATE` statement from the source database during replication. This occurs when tables
+in the source database that receive `UPDATE` statements lack either a `PRIMARY KEY` or
+a `REPLICA IDENTITY` setting. For live migration to replicate `UPDATE` and
+`DELETE` statements, tables must have either a `PRIMARY KEY` or a `REPLICA IDENTITY` set.
+
+## Set REPLICA IDENTITY on Postgres partitioned tables
+
+If your Postgres tables use native partitioning, setting `REPLICA IDENTITY` on the
+root (parent) table will not automatically apply it to the partitioned child tables.
+You must manually set `REPLICA IDENTITY` on each partitioned child table.
+
+## Can I use read/failover replicas as source database for live migration?
+
+Live migration does not support replication from read or failover replicas. You must
+provide a connection string that points directly to your source database for
+live migration.
+
+## Can I use live migration with a Postgres connection pooler like PgBouncer?
+
+Live migration does not support connection poolers. You must provide a
+connection string that points directly to your source and target databases
+for live migration to work smoothly.
+
+## Can I use Tiger Cloud instance as source for live migration?
+
+No, Tiger Cloud cannot be used as a source database for live migration.
+
+## How can I exclude a schema/table from being replicated in live migration?
+
+At present, live migration does not allow for excluding schemas or tables from
+replication, but this feature is expected to be added in future releases.
+However, a workaround is available for skipping table data using the `--skip-table-data` flag.
+For more information, please refer to the help text under the `migrate` subcommand.
+
+## Large migrations blocked
+
+Tiger Cloud automatically manages the underlying disk volume. Due to
+platform limitations, it is only possible to resize the disk once every six
+hours. Depending on the rate at which you're able to copy data, you may be
+affected by this restriction. Affected instances are unable to accept new data
+and error with: `FATAL: terminating connection due to administrator command`.
+
+If you intend on migrating more than 400 GB of data to Tiger Cloud, open a
+support request requesting the required storage to be pre-allocated in your
+Tiger Cloud service.
+
+You can open a support request directly from [Tiger Cloud Console][support-link],
+or by email to [support@tigerdata.com](mailto:support@tigerdata.com).
+
+When `pg_dump` starts, it takes an `ACCESS SHARE` lock on all tables which it
+dumps. This ensures that tables aren't dropped before `pg_dump` is able to dump
+them. A side effect of this is that any query which tries to take an
+`ACCESS EXCLUSIVE` lock on a table is blocked by the `ACCESS SHARE` lock.
+
+A number of Tiger Cloud-internal processes require taking `ACCESS EXCLUSIVE`
+locks to ensure consistency of the data. The following is a non-exhaustive list
+of potentially affected operations:
+
+- converting a chunk into the columnstore/rowstore and back
+- continuous aggregate refresh (before 2.12)
+- create hypertable with foreign keys, truncate hypertable
+- enable hypercore on a hypertable
+- drop chunks
+
+The most likely impact of the above is that background jobs for retention
+policies, columnstore compression policies, and continuous aggregate refresh policies are
+blocked for the duration of the `pg_dump` command. This may have unintended
+consequences for your database performance.
+
+## Dumping with concurrency
+
+When using the `pg_dump` directory format, you can use multiple concurrent
+connections to the source database to dump data. This speeds up
+the dump process. Because there are multiple connections, it is
+possible for `pg_dump` to end up in a deadlock situation. When it detects a
+deadlock, it aborts the dump.
+
+In principle, any query which takes an `ACCESS EXCLUSIVE` lock on a table
+causes such a deadlock. As mentioned above, some common operations which take
+an `ACCESS EXCLUSIVE` lock are:
+- retention policies
+- columnstore compression policies
+- continuous aggregate refresh policies
+
+If you would like to use concurrency nonetheless, turn off all background jobs
+in the source database before running `pg_dump`, and turn them on once the dump
+is complete. If the dump procedure takes longer than the continuous aggregate
+refresh policy's window, you must manually refresh the continuous aggregate in
+the correct time range. For more information, consult the
+[refresh policies documentation].
+
+To turn off the jobs:
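+
+One way to do this is to mark every job as unscheduled with `alter_job` — a sketch, assuming your user is allowed to alter all jobs in `_timescaledb_config.bgw_job`:
+
+```sql
+SELECT public.alter_job(id::integer, scheduled => false)
+FROM _timescaledb_config.bgw_job;
+```
+
+Run the same statement with `scheduled => true` after the dump completes to re-enable the jobs.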
+
+## Restoring with concurrency
+
+If the directory format is used for `pg_dump` and `pg_restore`, concurrency can be
+employed to speed up the process. Unfortunately, loading the tables in the
+`timescaledb_catalog` schema concurrently causes errors. Furthermore, the
+`tsdbadmin` user does not have sufficient privileges to turn off triggers in
+this schema. To get around this limitation, load this schema serially, and then
+load the rest of the database concurrently.
+
+## Ownership of background jobs
+
+The `_timescaledb_config.bgw_jobs` table is used to manage background jobs.
+This includes custom jobs, columnstore compression policies, retention
+policies, and continuous aggregate refresh policies. On Tiger Cloud, this table
+has a trigger which ensures that no database user can create or modify jobs
+owned by another database user. This trigger can provide an obstacle for migrations.
+
+If the `--no-owner` flag is used with `pg_dump` and `pg_restore`, all
+objects in the target database are owned by the user that ran
+`pg_restore`, likely `tsdbadmin`.
+
+If all the background jobs in the source database were owned by a user of the
+same name as the user running the restore (again likely `tsdbadmin`), then
+loading the `_timescaledb_config.bgw_jobs` table should work.
+
+If the background jobs in the source were owned by the `postgres` user, they
+are automatically changed to be owned by the `tsdbadmin` user. In this case,
+you just need to verify that the jobs do not make use of privileges that the
+`tsdbadmin` user does not possess.
+
+If background jobs are owned by one or more users other than the user
+employed in restoring, then there could be issues. To work around this
+issue, do not dump this table with `pg_dump`. Provide either
+`--exclude-table-data='_timescaledb_config.bgw_job'` or
+`--exclude-table='_timescaledb_config.bgw_job'` to `pg_dump` to skip
+this table. Then, use `psql` and the `COPY` command to dump and
+restore this table with modified values for the `owner` column.
+
+Once the table has been loaded and the restore completed, you may then use SQL
+to adjust the ownership of the jobs and/or the associated stored procedures and
+functions as you wish.
+
+## Extension availability
+
+There are a vast number of Postgres extensions available in the wild.
+Tiger Cloud supports many of the most popular extensions, but not all extensions.
+Before migrating, check that the extensions you are using are supported on
+Tiger Cloud. Consult the [list of supported extensions].
+
+## TimescaleDB extension in the public schema
+
+When self-hosting, the TimescaleDB extension may be installed in an arbitrary
+schema. Tiger Cloud only supports installing the TimescaleDB extension in the
+`public` schema. How to go about resolving this depends heavily on the
+particular details of the source schema and the migration approach chosen.
+
+## Tablespaces
+
+Tiger Cloud does not support using custom tablespaces. Providing the
+`--no-tablespaces` flag to `pg_dump` and `pg_restore` when
+dumping/restoring the schema results in all objects being in the
+default tablespace, as desired.
+
+## Only one database per instance
+
+While Postgres clusters can contain many databases, Tiger Cloud services are
+limited to a single database. When migrating a cluster with multiple databases
+to Tiger Cloud, one can either migrate each source database to a separate
+Tiger Cloud service or "merge" source databases to target schemas.
+
+## Superuser privileges
+
+The `tsdbadmin` database user is the most powerful available on Tiger Cloud, but it
+is not a true superuser. Review your application for use of superuser privileged
+operations and mitigate before migrating.
+
+## Migrate partial continuous aggregates
+
+In order to improve the performance and compatibility of continuous aggregates, TimescaleDB
+v2.7 replaces _partial_ continuous aggregates with _finalized_ continuous aggregates.
+
+To test your database for partial continuous aggregates, run the following query:
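+
+A query along these lines reports any continuous aggregates still in the partial format (assuming TimescaleDB v2.7 or later, where the informational view exposes a `finalized` flag):
+
+```sql
+SELECT view_name, finalized
+FROM timescaledb_information.continuous_aggregates
+WHERE NOT finalized;
+```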
+
+If you have partial continuous aggregates in your database, [migrate them][migrate]
+from partial to finalized before you migrate your database.
+
+If you accidentally migrate partial continuous aggregates across Postgres
+versions, you see the following error when you query any continuous aggregates:
+
+===== PAGE: https://docs.tigerdata.com/ai/mcp-server/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+select extversion from pg_extension where extname = 'timescaledb';
+```
+
+Example 2 (sql):
+```sql
+SELECT version FROM pg_available_extension_versions WHERE name = 'timescaledb' ORDER BY 1 DESC;
+```
+
+Example 3 (sql):
+```sql
+DROP EXTENSION timescaledb;
+```
+
+Example 4 (sql):
+```sql
+CREATE EXTENSION timescaledb VERSION '<version>';
+```
+
+---
+
+## Energy consumption data tutorial - set up compression
+
+**URL:** llms-txt#energy-consumption-data-tutorial---set-up-compression
+
+**Contents:**
+- Compression setup
+- Add a compression policy
+- Taking advantage of query speedups
+
+You have now seen how to create a hypertable for your energy consumption
+dataset and query it. When ingesting a dataset like this, it
+is seldom necessary to update old data, and over time the amount of
+data in the tables grows. Since this data is mostly immutable, you can
+compress it to save space and avoid incurring additional cost.
+
+It is possible to use disk-oriented compression like the support
+offered by ZFS and Btrfs, but since TimescaleDB is built for handling
+event-oriented data (such as time-series), it comes with support for
+compressing data in hypertables.
+
+TimescaleDB compression allows you to store the data in a vastly more
+efficient format, allowing up to a 20x compression ratio compared to a
+normal Postgres table, though this is highly dependent on the
+data and configuration.
+
+TimescaleDB compression is implemented natively in Postgres and does
+not require special storage formats. Instead it relies on features of
+Postgres to transform the data into columnar format before
+compression. The use of a columnar format allows better compression
+ratio since similar data is stored adjacently. For more details on the
+compression format, see the [compression
+design][compression-design] section.
+
+A beneficial side-effect of compressing data is that certain queries
+are significantly faster since less data has to be read into
+memory.
+
+1. Connect to the Tiger Cloud service that contains the energy
+   dataset using, for example, `psql`.
+1. Enable compression on the table and pick suitable segment-by and
+   order-by columns using the `ALTER TABLE` command:
+
+   Depending on the choice of segment-by and order-by columns you can
+   get very different performance and compression ratios. To learn
+   more about how to pick the correct columns, see
+   [here][segment-by-columns].
+1. You can manually compress all the chunks of the hypertable using
+ `compress_chunk` in this manner:
+
+ You can also [automate compression][automatic-compression] by
+ adding a [compression policy][add_compression_policy] which will
+ be covered below.
+
+1. Now that you have compressed the table, you can compare the size of
+   the dataset before and after compression:
+
+   This shows a significant improvement in storage usage:
+
+## Add a compression policy
+
+To avoid running the compression step each time you have some data to
+compress you can set up a compression policy. The compression policy
+allows you to compress data that is older than a particular age, for
+example, to compress all chunks that are older than 8 days:
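+
+Using the tutorial's `metrics` hypertable, the policy can be sketched as (adjust the hypertable name and interval for your setup):
+
+```sql
+SELECT add_compression_policy('metrics', INTERVAL '8 days');
+```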
+
+Compression policies run on a regular schedule, by default once every
+day, which means that you might have up to 9 days of uncompressed data
+with the setting above.
+
+You can find more information on compression policies in the
+[add_compression_policy][add_compression_policy] section.
+
+## Taking advantage of query speedups
+
+Previously, compression was set up to be segmented by `type_id` column value.
+This means fetching data by filtering or grouping on that column will be
+more efficient. Ordering is also set to `created` descending so if you run queries
+which try to order data with that ordering, you should see performance benefits.
+
+For instance, if you run the query example from the previous section:
+
+You should see a decent performance difference between running against the
+compressed and the decompressed dataset. Try it yourself by running the previous query, decompressing
+the dataset, and running it again while timing the execution time. You can enable
+query timing in psql by running:
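+
+That toggle is the psql `\timing` meta-command:
+
+```sql
+\timing
+```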
+
+To decompress the whole dataset, run:
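+
+Mirroring the manual compression step above, a sketch using `decompress_chunk` on the tutorial's `metrics` hypertable:
+
+```sql
+SELECT decompress_chunk(c) FROM show_chunks('metrics') c;
+```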
+
+On an example setup, the observed speedup was an order of magnitude:
+30 ms when compressed vs 360 ms when decompressed.
+
+Try it yourself and see what you get!
+
+===== PAGE: https://docs.tigerdata.com/tutorials/financial-ingest-real-time/financial-ingest-dataset/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+ALTER TABLE metrics
+ SET (
+ timescaledb.compress,
+ timescaledb.compress_segmentby='type_id',
+ timescaledb.compress_orderby='created DESC'
+ );
+```
+
+Example 2 (sql):
+```sql
+SELECT compress_chunk(c) from show_chunks('metrics') c;
+```
+
+Example 3 (sql):
+```sql
+SELECT
+ pg_size_pretty(before_compression_total_bytes) as before,
+ pg_size_pretty(after_compression_total_bytes) as after
+ FROM hypertable_compression_stats('metrics');
+```
+
+Example 4 (sql):
+```sql
+before | after
+ --------+-------
+ 180 MB | 16 MB
+ (1 row)
+```
+
+---
+
+## Tuple decompression limit exceeded by operation
+
+**URL:** llms-txt#tuple-decompression-limit-exceeded-by-operation
+
+
+
+When inserting, updating, or deleting tuples from chunks in the columnstore, it might be necessary to convert tuples to the rowstore. This happens either when you are updating existing tuples or have constraints that need to be verified during insert time. If you happen to trigger a lot of rowstore conversion with a single command, you may end up running out of storage space. For this reason, a limit has been put in place on the number of tuples you can decompress into the rowstore for a single command.
+
+The limit can be increased or turned off (set to 0) like so:
+
+===== PAGE: https://docs.tigerdata.com/_troubleshooting/caggs-queries-fail/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+-- set limit to a million tuples
+SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 1000000;
+-- disable limit by setting to 0
+SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
+```
+
+---
+
+## Schema modifications
+
+**URL:** llms-txt#schema-modifications
+
+**Contents:**
+- Add a nullable column
+- Add a column with a default value and a NOT NULL constraint
+- Rename a column
+- Drop a column
+
+You can modify the schema of compressed hypertables in recent versions of
+TimescaleDB.
+
+|Schema modification|Before TimescaleDB 2.1|TimescaleDB 2.1 to 2.5|TimescaleDB 2.6 and above|
+|-|-|-|-|
+|Add a nullable column|❌|✅|✅|
+|Add a column with a default value and a `NOT NULL` constraint|❌|❌|✅|
+|Rename a column|❌|✅|✅|
+|Drop a column|❌|❌|✅|
+|Change the data type of a column|❌|❌|❌|
+
+To perform operations that aren't supported on compressed hypertables, first
+[decompress][decompression] the table.
+
+## Add a nullable column
+
+To add a nullable column:
+
+Note that adding constraints to the new column is not supported before
+TimescaleDB v2.6.
+
+## Add a column with a default value and a NOT NULL constraint
+
+To add a column with a default value and a not-null constraint:
+
+## Drop a column
+
+You can drop a column from a compressed hypertable, if the column is not an
+`orderby` or `segmentby` column. To drop a column:
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/compression/decompress-chunks/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+ALTER TABLE <table> ADD COLUMN <column> <data_type>;
+```
+
+Example 2 (sql):
+```sql
+ALTER TABLE conditions ADD COLUMN device_id integer;
+```
+
+Example 3 (sql):
+```sql
+ALTER TABLE <table> ADD COLUMN <column> <data_type>
+  NOT NULL DEFAULT <default_value>;
+```
+
+Example 4 (sql):
+```sql
+ALTER TABLE conditions ADD COLUMN device_id integer
+ NOT NULL DEFAULT 1;
+```
+
+---
+
+## Compression
+
+**URL:** llms-txt#compression
+
+**Contents:**
+- Restrictions
+
+Old API since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0) Replaced by Hypercore.
+
+Compression functionality is included in Hypercore.
+
+Before you set up compression, you need to
+[configure the hypertable for compression][configure-compression] and then
+[set up a compression policy][add_compression_policy].
+
+Before you set up compression for the first time, read
+the compression
+[blog post](https://www.tigerdata.com/blog/building-columnar-compression-in-a-row-oriented-database)
+and
+[documentation](https://docs.tigerdata.com/use-timescale/latest/compression/).
+
+You can also [compress chunks manually][compress_chunk], instead of using an
+automated compression policy to compress chunks as they age.
+
+Compressed chunks have the following limitations:
+
+* `ROW LEVEL SECURITY` is not supported on compressed chunks.
+* Creation of unique constraints on compressed chunks is not supported. You
+ can add them by disabling compression on the hypertable and re-enabling
+ after constraint creation.
+
+In general, compressing a hypertable imposes some limitations on the types
+of data modifications that you can perform on data inside a compressed chunk.
+
+This table shows changes to the compression feature, added in different versions
+of TimescaleDB:
+
+|TimescaleDB version|Supported data modifications on compressed chunks|
+|-|-|
+|1.5 - 2.0|Data and schema modifications are not supported.|
+|2.1 - 2.2|Schema may be modified on compressed hypertables. Data modification not supported.|
+|2.3|Schema modifications and basic insert of new data is allowed. Deleting, updating and some advanced insert statements are not supported.|
+|2.11|Deleting, updating and advanced insert statements are supported.|
+
+In TimescaleDB 2.1 and later, you can modify the schema of hypertables that
+have compressed chunks. Specifically, you can add columns to and rename existing
+columns of compressed hypertables.
+
+In TimescaleDB v2.3 and later, you can insert data into compressed chunks
+and enable compression policies on distributed hypertables.
+
+In TimescaleDB v2.11 and later, you can update and delete compressed data.
+You can also use advanced insert statements like `ON CONFLICT` and `RETURNING`.
+
+===== PAGE: https://docs.tigerdata.com/api/distributed-hypertables/ =====
+
+---
diff --git a/skills/timescaledb/references/continuous_aggregates.md b/skills/timescaledb/references/continuous_aggregates.md
new file mode 100644
index 0000000..f457d69
--- /dev/null
+++ b/skills/timescaledb/references/continuous_aggregates.md
@@ -0,0 +1,1880 @@
+# Timescaledb - Continuous Aggregates
+
+**Pages:** 21
+
+---
+
+## Permissions error when migrating a continuous aggregate
+
+**URL:** llms-txt#permissions-error-when-migrating-a-continuous-aggregate
+
+
+
+You might get a permissions error when migrating a continuous aggregate from old
+to new format using `cagg_migrate`. The user performing the migration must have
+the following permissions:
+
+* Select, insert, and update permissions on the tables
+  `_timescaledb_catalog.continuous_agg_migrate_plan` and
+  `_timescaledb_catalog.continuous_agg_migrate_plan_step`
+* Usage permissions on the sequence
+ `_timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq`
+
+To solve the problem, change to a user capable of granting permissions, and
+grant the following permissions to the user performing the migration:
+
+===== PAGE: https://docs.tigerdata.com/_troubleshooting/compression-high-cardinality/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+GRANT SELECT, INSERT, UPDATE ON TABLE _timescaledb_catalog.continuous_agg_migrate_plan TO <user>;
+GRANT SELECT, INSERT, UPDATE ON TABLE _timescaledb_catalog.continuous_agg_migrate_plan_step TO <user>;
+GRANT USAGE ON SEQUENCE _timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq TO <user>;
+```
+
+---
+
+## CREATE MATERIALIZED VIEW (Continuous Aggregate)
+
+**URL:** llms-txt#create-materialized-view-(continuous-aggregate)
+
+**Contents:**
+- Samples
+- Parameters
+
+The `CREATE MATERIALIZED VIEW` statement is used to create continuous
+aggregates. To learn more, see the
+[continuous aggregate how-to guides][cagg-how-tos].
+
+The statement is of the form:
+
+The continuous aggregate view defaults to `WITH DATA`. This means that when the
+view is created, it refreshes using all the current data in the underlying
+hypertable or continuous aggregate. This occurs once when the view is created.
+If you want the view to be refreshed regularly, you can use a refresh policy. If
+you do not want the view to update when it is first created, use the
+`WITH NO DATA` parameter. For more information, see
+[`refresh_continuous_aggregate`][refresh-cagg].
+
+Continuous aggregates have some limitations of what types of queries they can
+support. For more information, see the
+[continuous aggregates section][cagg-how-tos].
+
+TimescaleDB v2.17.1 and greater can dramatically decrease the amount
+of data written on a continuous aggregate in the presence of a small number of changes,
+reduce the I/O cost of refreshing a continuous aggregate, and generate fewer Write-Ahead
+Log (WAL) records. To enable this, set the `timescaledb.enable_merge_on_cagg_refresh`
+configuration parameter to `TRUE`. This makes continuous aggregate
+refresh use a merge instead of deleting old materialized data and re-inserting.
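+
+For example, at the session level (use `ALTER DATABASE ... SET` to persist the setting):
+
+```sql
+SET timescaledb.enable_merge_on_cagg_refresh TO true;
+```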
+
+For more settings for continuous aggregates, see [timescaledb_information.continuous_aggregates][info-views].
+
+Create a daily continuous aggregate view:
+
+Add a thirty day continuous aggregate on top of the same raw hypertable:
+
+Add an hourly continuous aggregate on top of the same raw hypertable:
+
+|Name|Type|Description|
+|-|-|-|
+|`<view_name>`|TEXT|Name (optionally schema-qualified) of continuous aggregate view to create|
+|`<column_list>`|TEXT|Optional list of names to be used for columns of the view. If not given, the column names are calculated from the query|
+|`WITH` clause|TEXT|Specifies options for the continuous aggregate view|
+|`<select_query>`|TEXT|A `SELECT` query that uses the specified syntax|
+
+Required `WITH` clause options:
+
+|Name|Type|Description|
+|-|-|-|
+|`timescaledb.continuous`|BOOLEAN|If `timescaledb.continuous` is not specified, this is a regular Postgres materialized view|
+
+Optional `WITH` clause options:
+
+|Name|Type|Description|Default value|
+|-|-|-|-|
+|`timescaledb.chunk_interval`|INTERVAL|Set the chunk interval of the continuous aggregate's materialization hypertable.|10x the chunk interval of the original hypertable|
+|`timescaledb.create_group_indexes`|BOOLEAN|Create indexes on the continuous aggregate for columns in its `GROUP BY` clause. Indexes are in the form `(<group-by column>, time_bucket)`|`TRUE`|
+|`timescaledb.finalized`|BOOLEAN|In TimescaleDB 2.7 and above, use the new version of continuous aggregates, which stores finalized results for aggregate functions. Supports all aggregate functions, including ones that use `FILTER`, `ORDER BY`, and `DISTINCT` clauses.|`TRUE`|
+|`timescaledb.materialized_only`|BOOLEAN|Return only materialized data when querying the continuous aggregate view|`TRUE`|
+|`timescaledb.invalidate_using`|TEXT|Since [TimescaleDB v2.22.0](https://github.com/timescale/timescaledb/releases/tag/2.22.0). Set to `wal` to read changes from the WAL using logical decoding, then update the materialization invalidations for continuous aggregates using this information. This reduces the I/O and CPU needed to manage the hypertable invalidation log. Set to `trigger` to collect invalidations whenever there are inserts, updates, or deletes to a hypertable. This default behavior uses more resources than `wal`.|`trigger`|
+
+For more information, see the [real-time aggregates][real-time-aggregates] section.
+
+===== PAGE: https://docs.tigerdata.com/api/continuous-aggregates/alter_materialized_view/ =====
+
+
+---
+
+## Queries fail when defining continuous aggregates but work on regular tables
+
+**URL:** llms-txt#queries-fail-when-defining-continuous-aggregates-but-work-on-regular-tables
+
+Continuous aggregates do not work on all queries. For example, TimescaleDB does not support window functions on
+continuous aggregates. If you use an unsupported function, you see the following error:
+
+The following table summarizes the aggregate functions supported in continuous aggregates:
+
+| Function, clause, or feature |TimescaleDB 2.6 and earlier|TimescaleDB 2.7, 2.8, and 2.9|TimescaleDB 2.10 and later|
+|------------------------------------------------------------|-|-|-|
+| Parallelizable aggregate functions |✅|✅|✅|
+| [Non-parallelizable SQL aggregates][postgres-parallel-agg] |❌|✅|✅|
+| `ORDER BY` |❌|✅|✅|
+| Ordered-set aggregates |❌|✅|✅|
+| Hypothetical-set aggregates |❌|✅|✅|
+| `DISTINCT` in aggregate functions |❌|✅|✅|
+| `FILTER` in aggregate functions |❌|✅|✅|
+| `FROM` clause supports `JOINS` |❌|❌|✅|
+
+`DISTINCT` works inside aggregate functions, but not in the query definition itself. For example, for the `candle`
+table in Example 2:
+
+- The query in Example 3, which uses `COUNT(DISTINCT symbol)`, works.
+
+- The query in Example 4, which uses `SELECT DISTINCT ON (symbol)`, does not.
+
+===== PAGE: https://docs.tigerdata.com/_troubleshooting/caggs-real-time-previously-materialized-not-shown/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+ERROR: invalid continuous aggregate view
+ SQL state: 0A000
+```
+
+Example 2 (sql):
+```sql
+CREATE TABLE public.candle(
+symbol_id uuid NOT NULL,
+symbol text NOT NULL,
+"time" timestamp with time zone NOT NULL,
+open double precision NOT NULL,
+high double precision NOT NULL,
+low double precision NOT NULL,
+close double precision NOT NULL,
+volume double precision NOT NULL
+);
+```
+
+Example 3 (sql):
+```sql
+CREATE MATERIALIZED VIEW candles_start_end
+ WITH (timescaledb.continuous) AS
+ SELECT time_bucket('1 hour', "time"), COUNT(DISTINCT symbol), first(time, time) as first_candle, last(time, time) as last_candle
+ FROM candle
+ GROUP BY 1;
+```
+
+Example 4 (sql):
+```sql
+CREATE MATERIALIZED VIEW candles_start_end
+ WITH (timescaledb.continuous) AS
+ SELECT DISTINCT ON (symbol)
+ symbol,symbol_id, first(time, time) as first_candle, last(time, time) as last_candle
+ FROM candle
+ GROUP BY symbol_id;
+```
+
+---
+
+## Hierarchical continuous aggregate fails with incompatible bucket width
+
+**URL:** llms-txt#hierarchical-continuous-aggregate-fails-with-incompatible-bucket-width
+
+
+
+If you attempt to create a hierarchical continuous aggregate, you must use
+compatible time buckets. You can't create a continuous aggregate with a
+fixed-width time bucket on top of a continuous aggregate with a variable-width
+time bucket. For more information, see the restrictions section in
+[hierarchical continuous aggregates][h-caggs-restrictions].
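+
+As an illustrative sketch (view and column names hypothetical), a fixed-width bucket cannot be built on top of a calendar-based, variable-width bucket:
+
+```sql
+-- Variable-width (calendar) bucket: months differ in length.
+CREATE MATERIALIZED VIEW conditions_monthly
+  WITH (timescaledb.continuous) AS
+  SELECT time_bucket('1 month', day) AS month, avg(avg_temp) AS avg_temp
+  FROM conditions_daily
+  GROUP BY month;
+
+-- Fails: a fixed-width bucket on top of a variable-width one.
+CREATE MATERIALIZED VIEW conditions_ninety_days
+  WITH (timescaledb.continuous) AS
+  SELECT time_bucket('90 days', month) AS bucket, avg(avg_temp) AS avg_temp
+  FROM conditions_monthly
+  GROUP BY bucket;
+```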
+
+===== PAGE: https://docs.tigerdata.com/_troubleshooting/caggs-migrate-permissions/ =====
+
+---
+
+## About data retention with continuous aggregates
+
+**URL:** llms-txt#about-data-retention-with-continuous-aggregates
+
+**Contents:**
+- Data retention on a continuous aggregate itself
+
+You can downsample your data by combining a data retention policy with
+[continuous aggregates][continuous_aggregates]. If you set your refresh policies
+correctly, you can delete old data from a hypertable without deleting it from
+any continuous aggregates. This lets you save on raw data storage while keeping
+summarized data for historical analysis.
+
+To keep your aggregates while dropping raw data, you must be careful about
+refreshing your aggregates. You can delete raw data from the underlying table
+without deleting data from continuous aggregates, so long as you don't refresh
+the aggregate over the deleted data. When you refresh a continuous aggregate,
+TimescaleDB updates the aggregate based on changes in the raw data for the
+refresh window. If it sees that the raw data was deleted, it also deletes the
+aggregate data. To prevent this, make sure that the aggregate's refresh window
+doesn't overlap with any deleted data. For more information, see the following
+example.
+
+As an example, say that you add a continuous aggregate to a `conditions`
+hypertable that stores device temperatures:
+
+This creates a `conditions_summary_daily` aggregate which stores the daily
+temperature per device. The aggregate refreshes every day. Every time it
+refreshes, it updates with any data changes from 7 days ago to 1 day ago.
+
+You should **not** set a 24-hour retention policy on the `conditions`
+hypertable. If you do, chunks older than 1 day are dropped. Then the aggregate
+refreshes based on data changes. Since the data change was to delete data older
+than 1 day, the aggregate also deletes the data. You end up with no data in the
+`conditions_summary_daily` view.
+
+To fix this, set a longer retention policy, for example 30 days:
+
+Now, chunks older than 30 days are dropped. But when the aggregate refreshes, it
+doesn't look for changes older than 30 days. It only looks for changes between 7
+days and 1 day ago. The raw hypertable still contains data for that time period.
+So your aggregate retains the data.
+
+## Data retention on a continuous aggregate itself
+
+You can also apply data retention on a continuous aggregate itself. For example,
+you can keep raw data for 30 days, as mentioned earlier. Meanwhile, you can keep
+daily data for 600 days, and no data beyond that.
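+
+Following the numbers above, a sketch of such a policy on the aggregate itself; `add_retention_policy` also accepts a continuous aggregate (the view name is from the earlier example):
+
+```sql
+-- Keep daily summary data for 600 days, then drop it.
+SELECT add_retention_policy('conditions_summary_daily', INTERVAL '600 days');
+```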
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/data-retention/about-data-retention/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+CREATE MATERIALIZED VIEW conditions_summary_daily (day, device, temp)
+WITH (timescaledb.continuous) AS
+ SELECT time_bucket('1 day', time), device, avg(temperature)
+ FROM conditions
+ GROUP BY (1, 2);
+
+SELECT add_continuous_aggregate_policy('conditions_summary_daily', '7 days', '1 day', '1 day');
+```
+
+Example 2 (sql):
+```sql
+SELECT add_retention_policy('conditions', INTERVAL '30 days');
+```
+
+---
+
+## Jobs in TimescaleDB
+
+**URL:** llms-txt#jobs-in-timescaledb
+
+TimescaleDB natively includes some job-scheduling policies, such as:
+
+* [Continuous aggregate policies][caggs] to automatically refresh continuous aggregates
+* [Hypercore policies][setup-hypercore] to optimize and compress historical data
+* [Retention policies][retention] to drop historical data
+* [Reordering policies][reordering] to reorder data within chunks
+
+If these don't cover your use case, you can create and schedule custom-defined jobs to run within
+your database. They help you automate periodic tasks that aren't covered by the native policies.
+
+In this section, you see how to:
+
+* [Create and manage jobs][create-jobs]
+* Set up a [generic data retention][generic-retention] policy that applies across all hypertables
+* Implement [automatic moving of chunks between tablespaces][manage-storage]
+* Automatically [downsample and compress][downsample-compress] older chunks
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/security/ =====
+
+---
+
+## Continuous aggregate doesn't refresh with newly inserted historical data
+
+**URL:** llms-txt#continuous-aggregate-doesn't-refresh-with-newly-inserted-historical-data
+
+
+
+Materialized views are generally used with ordered data. If you insert
+historical data, or data that is not related to the current time, you need to
+refresh your policies and reevaluate the values that carry over from the past
+to the present.
+
+You can set up an after-insert rule or trigger on your hypertable, or handle
+upserts, to work out what needs to be refreshed as the data is merged.
+
+Let's say you inserted ordered timeframes named A, B, D, and F, and you already
+have a continuous aggregate over this data. If you now insert E, you need to
+refresh E and F. However, if you insert C, you need to refresh C, D, E,
+and F.
+
+1. A, B, D, and F are already materialized in a view with all data.
+1. To insert C, split the data into `AB` and `DEF` subsets.
+1. `AB` are consistent and the materialized data is too; you only need to
+ reuse it.
+1. Insert C, `DEF`, and refresh policies after C.
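+
+As a sketch, after backfilling C you could manually refresh just the affected window; the view name and timestamps below are hypothetical:
+
+```sql
+-- Refresh only the region covering C through F.
+CALL refresh_continuous_aggregate('conditions_summary', '2024-01-03 00:00', '2024-01-07 00:00');
+```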
+
+This can use a lot of resources to process, especially if you have any important
+data in the past that also needs to be brought to the present.
+
+Consider an example where you have 300 columns on a single hypertable and use,
+for example, five of them in a continuous aggregate. In this case, refreshes
+could be expensive, and it would make more sense to isolate these five columns
+in another hypertable. Alternatively, you might create one hypertable per
+metric and refresh them independently.
+
+===== PAGE: https://docs.tigerdata.com/_troubleshooting/locf-queries-null-values-not-missing/ =====
+
+---
+
+## Convert continuous aggregates to the columnstore
+
+**URL:** llms-txt#convert-continuous-aggregates-to-the-columnstore
+
+**Contents:**
+- Enable compression on continuous aggregates
+ - Enabling and disabling compression on continuous aggregates
+- Compression policies on continuous aggregates
+
+Continuous aggregates are often used to downsample historical data. If the data is only used for analytical queries
+and never modified, you can compress the aggregate to save on storage.
+
+This is the old API. Since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0), it is replaced by converting continuous aggregates to the columnstore.
+
+Before version
+[2.18.1](https://github.com/timescale/timescaledb/releases/tag/2.18.1), you can't
+refresh the compressed regions of a continuous aggregate. To avoid conflicts
+between compression and refresh, make sure you set `compress_after` to a larger
+interval than the `start_offset` of your [refresh
+policy](https://docs.tigerdata.com/api/latest/continuous-aggregates/add_continuous_aggregate_policy).
+
+Compression on continuous aggregates works similarly to [compression on
+hypertables][compression]. When compression is enabled and no other options are
+provided, the `segment_by` value is automatically set to the `GROUP BY`
+columns of the continuous aggregate, and the `time_bucket` column is used as
+the `order_by` column in the compression configuration.
+
+## Enable compression on continuous aggregates
+
+You can enable and disable compression on continuous aggregates by setting the
+`compress` parameter when you alter the view.
+
+### Enabling and disabling compression on continuous aggregates
+
+1. For an existing continuous aggregate, at the `psql` prompt, enable
+ compression:
+
+1. Disable compression:
+
+Disabling compression on a continuous aggregate fails if there are compressed
+chunks associated with the continuous aggregate. In this case, you need to
+decompress the chunks, and then drop any compression policy on the continuous
+aggregate, before you disable compression. For more detailed information, see
+the [decompress chunks][decompress-chunks] section:
+
+## Compression policies on continuous aggregates
+
+Before setting up a compression policy on a continuous aggregate, you should set
+up a [refresh policy][refresh-policy]. The compression policy interval should be
+set so that actively refreshed regions are not compressed. This is to prevent
+refresh policies from failing. For example, consider a refresh policy like this:
+
+With this kind of refresh policy, the compression policy needs the
+`compress_after` parameter greater than the `start_offset` parameter of the
+continuous aggregate policy:
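+
+Matching the refresh policy shown in Example 4 (`start_offset => INTERVAL '30 days'`), a sketch of a compatible compression policy; the 45-day value is illustrative, any interval larger than the `start_offset` works:
+
+```sql
+-- compress_after must exceed the refresh policy's start_offset (30 days here).
+SELECT add_compression_policy('cagg_name', compress_after => INTERVAL '45 days');
+```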
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/compression/manual-compression/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+ALTER MATERIALIZED VIEW cagg_name set (timescaledb.compress = true);
+```
+
+Example 2 (sql):
+```sql
+ALTER MATERIALIZED VIEW cagg_name set (timescaledb.compress = false);
+```
+
+Example 3 (sql):
+```sql
+SELECT decompress_chunk(c, true) FROM show_chunks('cagg_name') c;
+```
+
+Example 4 (sql):
+```sql
+SELECT add_continuous_aggregate_policy('cagg_name',
+ start_offset => INTERVAL '30 days',
+ end_offset => INTERVAL '1 day',
+ schedule_interval => INTERVAL '1 hour');
+```
+
+---
+
+## Time and continuous aggregates
+
+**URL:** llms-txt#time-and-continuous-aggregates
+
+**Contents:**
+- Declare an explicit timezone
+- Integer-based time
+
+Functions that depend on a local timezone setting inside a continuous aggregate
+are not supported. You cannot adjust to a local time because the timezone setting
+changes from user to user.
+
+To manage this, you can use explicit timezones in the view definition.
+Alternatively, you can create your own custom aggregation scheme for tables that
+use an integer time column.
+
+## Declare an explicit timezone
+
+The most common method of working with timezones is to declare an explicit
+timezone in the view query.
+
+1. At the `psql` prompt, create the view and declare the timezone:
+
+1. Alternatively, you can cast to a timestamp after the view using `SELECT`:
+
+## Integer-based time
+
+Date and time are usually expressed as year-month-day and hours:minutes:seconds.
+Most TimescaleDB databases use a [date/time-type][postgres-date-time] column to
+express the date and time. However, in some cases, you might need to convert
+these common time and date formats to a format that uses an integer. The most
+common integer time is Unix epoch time, which is the number of seconds since the
+Unix epoch of 1970-01-01, but other types of integer-based time formats are
+possible.
+
+These examples use a hypertable called `devices` that contains CPU and disk
+usage information. The devices measure time using the Unix epoch.
+
+To create a hypertable that uses an integer-based column as time, you need to
+provide the chunk time interval. In this case, each chunk is 10 minutes.
+
+1. At the `psql` prompt, create a hypertable and define the integer-based time column and chunk time interval:
+
+If you are self-hosting TimescaleDB v2.19.3 and below, create a [Postgres relational table][pg-create-table],
+then convert it using [create_hypertable][create_hypertable]. You then enable hypercore with a call
+to [ALTER TABLE][alter_table_hypercore].
+
+To define a continuous aggregate on a hypertable that uses integer-based time,
+you need to have a function to get the current time in the correct format, and
+set it for the hypertable. You can do this with the
+[`set_integer_now_func`][api-set-integer-now-func]
+function. It can be defined as a regular Postgres function, but needs to be
+[`STABLE`][pg-func-stable],
+take no arguments, and return an integer value of the same type as the time
+column in the table. When you have set up the time-handling, you can create the
+continuous aggregate.
+
+1. At the `psql` prompt, set up a function to convert the time to the Unix epoch:
+
+1. Create the continuous aggregate for the `devices` table:
+
+1. Insert some rows into the table:
+
+This command uses the `tablefunc` extension to generate a normal
+ distribution, and uses the `row_number` function to turn it into a
+ cumulative sequence.
+1. Check that the view contains the correct data:
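+
+The steps above might look like the following sketch. The bucket width `5` means five minutes, since the `time` column stores minutes; the inserted values and view name are illustrative:
+
+```sql
+-- Continuous aggregate over the integer-based time column.
+CREATE MATERIALIZED VIEW devices_summary
+  WITH (timescaledb.continuous) AS
+  SELECT time_bucket(5, time) AS bucket,
+         avg(cpu_usage) AS avg_cpu,
+         avg(disk_usage) AS avg_disk
+  FROM devices
+  GROUP BY bucket;
+
+-- Insert a few rows, then check the view.
+INSERT INTO devices (time, cpu_usage, disk_usage)
+  VALUES (28144959, 37, 61), (28144960, 41, 62), (28144961, 39, 64);
+
+SELECT * FROM devices_summary ORDER BY bucket;
+```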
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/continuous-aggregates/materialized-hypertables/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+CREATE MATERIALIZED VIEW device_summary
+ WITH (timescaledb.continuous)
+ AS
+ SELECT
+ time_bucket('1 hour', observation_time) AS bucket,
+ min(observation_time AT TIME ZONE 'EST') AS min_time,
+ device_id,
+ avg(metric) AS metric_avg,
+ max(metric) - min(metric) AS metric_spread
+ FROM
+ device_readings
+ GROUP BY bucket, device_id;
+```
+
+Example 2 (sql):
+```sql
+SELECT min_time::timestamp FROM device_summary;
+```
+
+Example 3 (sql):
+```sql
+CREATE TABLE devices(
+ time BIGINT, -- Time in minutes since epoch
+ cpu_usage INTEGER, -- Total CPU usage
+ disk_usage INTEGER, -- Total disk usage
+ PRIMARY KEY (time)
+ ) WITH (
+ tsdb.hypertable,
+ tsdb.partition_column='time',
+ tsdb.chunk_interval='10'
+ );
+```
+
+Example 4 (sql):
+```sql
+-- The time column stores minutes since the Unix epoch, so the
+-- now-function must return minutes as well.
+CREATE FUNCTION current_epoch() RETURNS BIGINT
+ LANGUAGE SQL STABLE AS $$
+ SELECT (EXTRACT(EPOCH FROM CURRENT_TIMESTAMP) / 60)::bigint;$$;
+
+ SELECT set_integer_now_func('devices', 'current_epoch');
+```
+
+---
+
+## Create an index on a continuous aggregate
+
+**URL:** llms-txt#create-an-index-on-a-continuous-aggregate
+
+**Contents:**
+- Automatically created indexes
+ - Turn off automatic index creation
+- Manually create and drop indexes
+ - Limitations on created indexes
+
+By default, some indexes are automatically created when you create a continuous
+aggregate. You can change this behavior. You can also manually create and drop
+indexes.
+
+## Automatically created indexes
+
+When you create a continuous aggregate, an index is automatically created for
+each `GROUP BY` column. The index is a composite index, combining the `GROUP BY`
+column with the `time_bucket` column.
+
+For example, if you define a continuous aggregate view with `GROUP BY device,
+location, bucket`, two composite indexes are created: one on `{device, bucket}`
+and one on `{location, bucket}`.
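+
+For instance, assuming a hypothetical `readings` hypertable, a view grouped by `device`, `location`, and the bucket gets both composite indexes automatically:
+
+```sql
+CREATE MATERIALIZED VIEW readings_hourly
+  WITH (timescaledb.continuous) AS
+  SELECT time_bucket('1 hour', time) AS bucket,
+         device,
+         location,
+         avg(value) AS avg_value
+  FROM readings
+  GROUP BY device, location, bucket;
+-- Composite indexes on {device, bucket} and {location, bucket} are created.
+```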
+
+### Turn off automatic index creation
+
+To turn off automatic index creation, set `timescaledb.create_group_indexes` to
+`false` when you create the continuous aggregate.
+
+## Manually create and drop indexes
+
+You can use a regular Postgres statement to create or drop an index on a
+continuous aggregate.
+
+For example, to create an index on `avg_temp` for a materialized hypertable
+named `weather_daily`:
+
+Indexes are created under the `_timescaledb_internal` schema, where the
+continuous aggregate data is stored. To drop the index, specify the schema. For
+example, to drop the index `avg_temp_idx`, run:
+
+### Limitations on created indexes
+
+In TimescaleDB v2.7 and later, you can create an index on any column in the
+materialized view. This includes aggregated columns, such as those storing sums
+and averages. In earlier versions of TimescaleDB, you can't create an index on
+an aggregated column.
+
+You can't create unique indexes on a continuous aggregate, in any of the
+TimescaleDB versions.
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/continuous-aggregates/about-continuous-aggregates/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+CREATE MATERIALIZED VIEW conditions_daily
+ WITH (timescaledb.continuous, timescaledb.create_group_indexes=false)
+ AS
+ ...
+```
+
+Example 2 (sql):
+```sql
+CREATE INDEX avg_temp_idx ON weather_daily (avg_temp);
+```
+
+Example 3 (sql):
+```sql
+DROP INDEX _timescaledb_internal.avg_temp_idx;
+```
+
+---
+
+## ALTER MATERIALIZED VIEW (Continuous Aggregate)
+
+**URL:** llms-txt#alter-materialized-view-(continuous-aggregate)
+
+**Contents:**
+- Samples
+- Arguments
+
+You use the `ALTER MATERIALIZED VIEW` statement to modify some of the `WITH`
+clause [options][create_materialized_view] for a continuous aggregate view. You can only set the `continuous` and `create_group_indexes` options when you [create a continuous aggregate][create_materialized_view]. `ALTER MATERIALIZED VIEW` also supports the following
+[Postgres clauses][postgres-alterview] on the continuous aggregate view:
+
+* `RENAME TO`: rename the continuous aggregate view
+* `RENAME [COLUMN]`: rename the continuous aggregate column
+* `SET SCHEMA`: set the new schema for the continuous aggregate view
+* `SET TABLESPACE`: move the materialization of the continuous aggregate view to the new tablespace
+* `OWNER TO`: set a new owner for the continuous aggregate view
+
+- Enable real-time aggregates for a continuous aggregate:
+
+- Enable hypercore for a continuous aggregate (since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0)):
+
+- Rename a column for a continuous aggregate:
+
+| Name | Type | Default | Required | Description |
+|---------------------------------------------------------------------------|-----------|------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `view_name` | TEXT | - | ✖ | The name of the continuous aggregate view to be altered. |
+| `timescaledb.materialized_only` | BOOLEAN | `true` | ✖ | Return only materialized data. Set to `false` to enable real-time aggregation. |
+| `timescaledb.enable_columnstore` | BOOLEAN | `true` | ✖ | Since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0) Enable columnstore. Effectively the same as `timescaledb.compress`. |
+| `timescaledb.compress` | BOOLEAN | Disabled. | ✖ | Enable compression. |
+| `timescaledb.orderby` | TEXT | Descending order on the time column in `table_name`. | ✖ | Since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0) Set the order in which items are used in the columnstore. Specified in the same way as an `ORDER BY` clause in a `SELECT` query. |
+| `timescaledb.compress_orderby` | TEXT | Descending order on the time column in `table_name`. | ✖ | Set the order used by compression. Specified in the same way as the `ORDER BY` clause in a `SELECT` query. |
+| `timescaledb.segmentby` | TEXT | No segmentation by column. | ✖ | Since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0) Set the list of columns used to segment data in the columnstore for `table`. An identifier representing the source of the data such as `device_id` or `tags_id` is usually a good candidate. |
+| `timescaledb.compress_segmentby` | TEXT | No segmentation by column. | ✖ | Set the list of columns used to segment the compressed data. An identifier representing the source of the data such as `device_id` or `tags_id` is usually a good candidate. |
+| `column_name` | TEXT | - | ✖ | Set the name of the column to order by or segment by. |
+| `timescaledb.compress_chunk_time_interval` | TEXT | - | ✖ | Reduce the total number of compressed/columnstore chunks for `table`. If you set `compress_chunk_time_interval`, compressed/columnstore chunks are merged with the previous adjacent chunk within `chunk_time_interval` whenever possible. These chunks are irreversibly merged. If you call to [decompress][decompress]/[convert_to_rowstore][convert_to_rowstore], merged chunks are not split up. You can call `compress_chunk_time_interval` independently of other compression settings; `timescaledb.compress`/`timescaledb.enable_columnstore` is not required. |
+| `timescaledb.enable_cagg_window_functions` | BOOLEAN | `false` | ✖ | EXPERIMENTAL: enable window functions on continuous aggregates. Support is experimental, as there is a risk of data inconsistency. For example, in backfill scenarios, buckets could be missed. |
+| `timescaledb.chunk_interval` (formerly `timescaledb.chunk_time_interval`) | INTERVAL | 10x the original hypertable. | ✖ | Set the chunk interval. Renamed in TimescaleDB v2.20. |
+
+===== PAGE: https://docs.tigerdata.com/api/continuous-aggregates/cagg_migrate/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+ALTER MATERIALIZED VIEW contagg_view SET (timescaledb.materialized_only = false);
+```
+
+Example 2 (sql):
+```sql
+ALTER MATERIALIZED VIEW contagg_view SET (
+ timescaledb.enable_columnstore = true,
+ timescaledb.segmentby = 'symbol' );
+```
+
+Example 3 (sql):
+```sql
+ALTER MATERIALIZED VIEW contagg_view RENAME COLUMN old_name TO new_name;
+```
+
+---
+
+## cagg_migrate()
+
+**URL:** llms-txt#cagg_migrate()
+
+**Contents:**
+- Required arguments
+- Optional arguments
+
+Migrate a continuous aggregate from the old format to the new format introduced
+in TimescaleDB 2.7.
+
+TimescaleDB 2.7 introduced a new format for continuous aggregates that improves
+performance. It also makes continuous aggregates compatible with more types of
+SQL queries.
+
+The new format, also called the finalized format, stores the continuous
+aggregate data exactly as it appears in the final view. The old format, also
+called the partial format, stores the data in a partially aggregated state.
+
+Use this procedure to migrate continuous aggregates from the old format to the
+new format.
+
+For more information, see the [migration how-to guide][how-to-migrate].
+
+There are known issues with `cagg_migrate()` in TimescaleDB 2.8.0. Upgrade to
+version 2.8.1 or later before using it.
+
+## Required arguments
+
+|Name|Type|Description|
+|-|-|-|
+|`cagg`|`REGCLASS`|The continuous aggregate to migrate|
+
+## Optional arguments
+
+|Name|Type|Description|
+|-|-|-|
+|`override`|`BOOLEAN`|If false, the old continuous aggregate keeps its name. The new continuous aggregate is named `_new`. If true, the new continuous aggregate gets the old name. The old continuous aggregate is renamed `_old`. Defaults to `false`.|
+|`drop_old`|`BOOLEAN`|If true, the old continuous aggregate is deleted. Must be used together with `override`. Defaults to `false`.|
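+
+Putting the arguments together, a call that migrates a hypothetical aggregate, gives the new aggregate the original name, and drops the old one might look like:
+
+```sql
+CALL cagg_migrate('conditions_summary_daily', override => TRUE, drop_old => TRUE);
+```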
+
+===== PAGE: https://docs.tigerdata.com/api/continuous-aggregates/drop_materialized_view/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+CALL cagg_migrate (
+ cagg REGCLASS,
+ override BOOLEAN DEFAULT FALSE,
+ drop_old BOOLEAN DEFAULT FALSE
+);
+```
+
+---
+
+## Dropping data
+
+**URL:** llms-txt#dropping-data
+
+**Contents:**
+- Drop a continuous aggregate view
+ - Dropping a continuous aggregate view
+- Drop raw data from a hypertable
+- Policy visualizer for downsampling
+
+When you are working with continuous aggregates, you can drop a view, or you can
+drop raw data from the underlying hypertable or from the continuous aggregate
+itself. A combination of [refresh][cagg-refresh] and data retention policies
+can help you downsample your data. This lets you keep historical data at a
+lower granularity than recent data.
+
+However, you should be aware if a retention policy is likely to drop raw data
+from your hypertable that you need in your continuous aggregate.
+
+To simplify the process of setting up downsampling, you can use
+the [visualizer and code generator][visualizer].
+
+## Drop a continuous aggregate view
+
+You can drop a continuous aggregate view using the `DROP MATERIALIZED VIEW`
+command. This command also removes refresh policies defined on the continuous
+aggregate. It does not drop the data from the underlying hypertable.
+
+### Dropping a continuous aggregate view
+
+1. From the `psql` prompt, drop the view:
+
+## Drop raw data from a hypertable
+
+If you drop data from a hypertable used in a continuous aggregate, it can lead
+to problems with your continuous aggregate view. In many cases, dropping underlying
+data replaces the aggregate with NULL values, which can lead to unexpected
+results in your view.
+
+You can drop data from a hypertable using `drop_chunks` in the usual way, but
+before you do so, always check that the chunk is not within the refresh window
+of a continuous aggregate that still needs the data. This is also important if
+you are manually refreshing a continuous aggregate. Calling
+`refresh_continuous_aggregate` on a region containing dropped chunks
+recalculates the aggregate without the dropped data.
+
+If a continuous aggregate is refreshing when data is dropped because of a
+retention policy, the aggregate is updated to reflect the loss of data. If you
+need to retain the continuous aggregate after dropping the underlying data, set
+the `start_offset` value of the aggregate policy to a smaller interval than the
+`drop_after` parameter of the retention policy.
+
+For more information, see the
+[data retention documentation][data-retention-with-continuous-aggregates].
+
+## Policy visualizer for downsampling
+
+Refer to the installation documentation for detailed setup instructions.
+
+[data-retention-with-continuous-aggregates]:
+ /use-timescale/:currentVersion:/data-retention/data-retention-with-continuous-aggregates
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/continuous-aggregates/migrate/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+DROP MATERIALIZED VIEW view_name;
+```
+
+---
+
+## Continuous aggregates on continuous aggregates
+
+**URL:** llms-txt#continuous-aggregates-on-continuous-aggregates
+
+**Contents:**
+- Create a continuous aggregate on top of another continuous aggregate
+- Use real-time aggregation with hierarchical continuous aggregates
+- Roll up calculations
+- Restrictions
+
+The more data you have, the more likely you are to run a more sophisticated analysis on it. When a simple one-level aggregation is not enough, TimescaleDB lets you create continuous aggregates on top of other continuous aggregates. This way, you summarize data at different levels of granularity, while still saving resources with precomputing.
+
+For example, you might have an hourly continuous aggregate that summarizes minute-by-minute
+data. To get a daily summary, you can create a new continuous aggregate on top
+of your hourly aggregate. This is more efficient than creating the daily
+aggregate on top of the original hypertable, because you can reuse the
+calculations from the hourly aggregate.
+
+This feature is available in TimescaleDB v2.9 and later.
+
+## Create a continuous aggregate on top of another continuous aggregate
+
+Creating a continuous aggregate on top of another continuous aggregate works the
+same way as creating it on top of a hypertable. In your query, select from a
+continuous aggregate rather than from the hypertable, and use the time-bucketed
+column from the existing continuous aggregate as your time column.
+
+For more information, see the instructions for
+[creating a continuous aggregate][create-cagg].
+
+## Use real-time aggregation with hierarchical continuous aggregates
+
+In TimescaleDB v2.13 and later, real-time aggregates are **DISABLED** by default. In earlier versions, real-time aggregates are **ENABLED** by default; when you create a continuous aggregate, queries to that view include the results from the most recent raw data.
+
+Real-time aggregates always return up-to-date data in response to queries. They accomplish this by
+joining the materialized data in the continuous aggregate with unmaterialized
+raw data from the source table or view.
+
+When continuous aggregates are stacked, each continuous aggregate is only aware
+of the layer immediately below. The joining of unmaterialized data happens
+recursively until it reaches the bottom layer, giving you access to recent data
+down to that layer.
+
+If you keep all continuous aggregates in the stack as real-time aggregates, the
+bottom layer is the source hypertable. That means every continuous aggregate in
+the stack has access to all recent data.
+
+If there is a non-real-time continuous aggregate somewhere in the stack, the
+recursive joining stops at that non-real-time continuous aggregate. Higher-level
+continuous aggregates don't receive any unmaterialized data from lower levels.
+
+For example, say you have the following continuous aggregates:
+
+* A real-time hourly continuous aggregate on the source hypertable
+* A real-time daily continuous aggregate on the hourly continuous aggregate
+* A non-real-time, or materialized-only, monthly continuous aggregate on the
+ daily continuous aggregate
+* A real-time yearly continuous aggregate on the monthly continuous aggregate
+
+Queries on the hourly and daily continuous aggregates include real-time,
+non-materialized data from the source hypertable. Queries on the monthly
+continuous aggregate only return already-materialized data. Queries on the
+yearly continuous aggregate return materialized data from the yearly continuous
+aggregate itself, plus more recent data from the monthly continuous aggregate.
+However, the data is limited to what is already materialized in the monthly
+continuous aggregate, and doesn't get even more recent data from the source
+hypertable. This happens because the materialized-only continuous aggregate
+provides a stopping point, and the yearly continuous aggregate is unaware of any
+layers beyond that stopping point. This is similar to
+[how stacked views work in Postgres][postgresql-views].
+
+To make queries on the yearly continuous aggregate access all recent data, you
+can either:
+
+* Make the monthly continuous aggregate real-time, or
+* Redefine the yearly continuous aggregate on top of the daily continuous
+ aggregate.
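+
+The first of these options is a one-line change; the view name below is hypothetical:
+
+```sql
+-- Turn the monthly aggregate into a real-time aggregate.
+ALTER MATERIALIZED VIEW conditions_monthly SET (timescaledb.materialized_only = false);
+```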
+
+
+
+## Roll up calculations
+
+When summarizing already-summarized data, be aware of how stacked calculations
+work. Not all calculations return the correct result if you stack them.
+
+For example, if you take the maximum of several subsets, then take the maximum
+of the maximums, you get the maximum of the entire set. But if you take the
+average of several subsets, then take the average of the averages, that can
+result in a different figure than the average of all the data.
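A quick way to see the difference is to compare the two calculations directly in plain SQL, with no TimescaleDB features needed:

```sql
-- Subsets {1, 2} and {3, 4, 5, 6}:
-- average of subset averages: (1.5 + 4.5) / 2 = 3.0
-- average of all six values:  21 / 6          = 3.5
SELECT
  (SELECT avg(v) FROM (VALUES (1.5), (4.5)) AS s(v)) AS avg_of_avgs,
  (SELECT avg(v) FROM (VALUES (1), (2), (3), (4), (5), (6)) AS t(v)) AS avg_of_all;
```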
+
+To simplify such calculations when using continuous aggregates on top of
+continuous aggregates, you can use the [hyperfunctions][hyperfunctions] from
+TimescaleDB Toolkit, such as the [statistical aggregates][stats-aggs]. These
+hyperfunctions are designed with a two-step aggregation pattern that allows you
+to roll them up into larger buckets. The first step creates a summary aggregate
+that can be rolled up, just as a maximum can be rolled up. You can store this
+aggregate in your continuous aggregate. Then, you can call an accessor function
+as a second step when you query from your continuous aggregate. This accessor
+takes the stored data from the summary aggregate and returns the final result.
+
For example, you can create an hourly continuous aggregate using `percentile_agg`
over a hypertable, as shown in Example 1 below.

To then stack another daily continuous aggregate over it, you can use a `rollup`
function, as shown in Example 2 below.
+
The TimescaleDB Toolkit `mean` function calculates the concrete mean from the
rolled-up values. The additional `percentile_daily` column contains the raw
rolled-up aggregate, which can be used in a further continuous aggregate stacked
on top of this continuous aggregate (for example, a monthly rollup of the daily
values).
+
+For more information and examples about using `rollup` functions to stack
+calculations, see the [percentile approximation API documentation][percentile_agg_api].
+
+There are some restrictions when creating a continuous aggregate on top of
+another continuous aggregate. In most cases, these restrictions are in place to
+ensure valid time-bucketing:
+
+* You can only create a continuous aggregate on top of a finalized continuous
+ aggregate. This new finalized format is the default for all continuous
+ aggregates created since TimescaleDB 2.7. If you need to create a continuous
+ aggregate on top of a continuous aggregate in the old format, you need to
+ [migrate your continuous aggregate][migrate-cagg] to the new format first.
+
+* The time bucket of a continuous aggregate should be greater than or equal to
+ the time bucket of the underlying continuous aggregate. It also needs to be
+ a multiple of the underlying time bucket. For example, you can rebucket an
+ hourly continuous aggregate into a new continuous aggregate with time
+ buckets of 6 hours. You can't rebucket the hourly continuous aggregate into
+ a new continuous aggregate with time buckets of 90 minutes, because 90
+ minutes is not a multiple of 1 hour.
+
+* A continuous aggregate with a fixed-width time bucket can't be created on
+ top of a continuous aggregate with a variable-width time bucket. Fixed-width
+ time buckets are time buckets defined in seconds, minutes, hours, and days,
+ because those time intervals are always the same length. Variable-width time
+ buckets are time buckets defined in months or years, because those time
+ intervals vary by the month or on leap years. This limitation prevents a
+ case such as trying to rebucket monthly buckets into `61 day` buckets, where
+ there is no good mapping between time buckets for month combinations such as
+ July/August (62 days).
+
Note that even though weeks are fixed-width intervals, you can't use monthly
or yearly time buckets on top of weekly time buckets for the same reason: the
number of weeks in a month or year is usually not an integer.

However, you can stack a variable-width time bucket on top of a fixed-width
time bucket. For example, creating a monthly continuous aggregate on top of
a daily continuous aggregate works, and is one of the main use cases for
this feature.
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/hypercore/secondary-indexes/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+CREATE MATERIALIZED VIEW response_times_hourly
+WITH (timescaledb.continuous)
+AS SELECT
+ time_bucket('1 h'::interval, ts) as bucket,
+ api_id,
+ avg(response_time_ms),
+ percentile_agg(response_time_ms) as percentile_hourly
+FROM response_times
+GROUP BY 1, 2;
+```
+
+Example 2 (sql):
+```sql
+CREATE MATERIALIZED VIEW response_times_daily
+WITH (timescaledb.continuous)
+AS SELECT
+ time_bucket('1 d'::interval, bucket) as bucket_daily,
+ api_id,
+ mean(rollup(percentile_hourly)) as mean,
+ rollup(percentile_hourly) as percentile_daily
+FROM response_times_hourly
+GROUP BY 1, 2;
+```
+
+---
+
+## Continuous aggregate watermark is in the future
+
+**URL:** llms-txt#continuous-aggregate-watermark-is-in-the-future
+
+**Contents:**
+ - Creating a new continuous aggregate with an explicit refresh window
+
+
+
+Continuous aggregates use a watermark to indicate which time buckets have
+already been materialized. When you query a continuous aggregate, your query
+returns materialized data from before the watermark. It returns real-time,
+non-materialized data from after the watermark.
+
+In certain cases, the watermark might be in the future. If this happens, all
+buckets, including the most recent bucket, are materialized and below the
+watermark. No real-time data is returned.
+
This might happen if you refresh your continuous aggregate over the time window
`<start_time>, NULL`, which materializes all recent data. It might also happen
if you create a continuous aggregate using the `WITH DATA` option, because this
implicitly refreshes your continuous aggregate with a window of `NULL, NULL`.
+
+To fix this, create a new continuous aggregate using the `WITH NO DATA` option.
+Then use a policy to refresh this continuous aggregate over an explicit time
+window.
+
+### Creating a new continuous aggregate with an explicit refresh window
+
+1. Create a continuous aggregate using the `WITH NO DATA` option:
+
+1. Refresh the continuous aggregate using a policy with an explicit
+ `end_offset`. For example:
+
1. Check your new continuous aggregate's watermark to make sure it is in the
   past, not the future.

   Get the ID for the materialization hypertable that contains the actual
   continuous aggregate data:

1. Use the returned ID to query for the watermark's timestamp:
+
+For TimescaleDB >= 2.12:
+
+For TimescaleDB < 2.12:
+
+If you choose to delete your old continuous aggregate after creating a new one,
+beware of historical data loss. If your old continuous aggregate contained data
+that you dropped from your original hypertable, for example through a data
+retention policy, the dropped data is not included in your new continuous
+aggregate.
+
+===== PAGE: https://docs.tigerdata.com/_troubleshooting/scheduled-jobs-stop-running/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
CREATE MATERIALIZED VIEW <cagg_name>
  WITH (timescaledb.continuous)
  AS SELECT time_bucket('<interval>', <time_column>) AS bucket,
    <other_columns>,
    ...
  FROM <hypertable>
  GROUP BY bucket, <other_columns>
  WITH NO DATA;
+```
+
+Example 2 (sql):
+```sql
SELECT add_continuous_aggregate_policy('<cagg_name>',
+ start_offset => INTERVAL '30 day',
+ end_offset => INTERVAL '1 hour',
+ schedule_interval => INTERVAL '1 hour');
+```
+
+Example 3 (sql):
+```sql
+SELECT id FROM _timescaledb_catalog.hypertable
+ WHERE table_name=(
+ SELECT materialization_hypertable_name
+ FROM timescaledb_information.continuous_aggregates
    WHERE view_name='<cagg_name>'
+ );
+```
+
+Example 4 (sql):
+```sql
+SELECT COALESCE(
  _timescaledb_functions.to_timestamp(_timescaledb_functions.cagg_watermark(<materialization_hypertable_id>)),
+ '-infinity'::timestamp with time zone
+ );
+```
+
+---
+
+## About continuous aggregates
+
+**URL:** llms-txt#about-continuous-aggregates
+
+**Contents:**
+- Types of aggregation
+- Continuous aggregates on continuous aggregates
+- Continuous aggregates with a `JOIN` clause
+ - JOIN examples
+- Function support
+- Components of a continuous aggregate
+ - Materialization hypertable
+ - Materialization engine
+ - Invalidation engine
+
+In modern applications, data usually grows very quickly. This means that aggregating
+it into useful summaries can become very slow. If you are collecting data very frequently, you might want to aggregate your
+data into minutes or hours instead. For example, if an IoT device takes
+temperature readings every second, you might want to find the average temperature
+for each hour. Every time you run this query, the database needs to scan the
+entire table and recalculate the average. TimescaleDB makes aggregating data lightning fast, accurate, and easy with continuous aggregates.
+
+
+
+Continuous aggregates in TimescaleDB are a kind of hypertable that is refreshed automatically
+in the background as new data is added, or old data is modified. Changes to your
+dataset are tracked, and the hypertable behind the continuous aggregate is
+automatically updated in the background.
+
Continuous aggregates have a much lower maintenance burden than regular Postgres materialized
views, because the whole view is not created from scratch on each refresh. This
means that you can get on with working with your data instead of maintaining your
database.
+
+Because continuous aggregates are based on hypertables, you can query them in exactly the same way as your other tables. This includes continuous aggregates in the rowstore, compressed into the [columnstore][hypercore],
+or [tiered to object storage][data-tiering]. You can even create [continuous aggregates on top of your continuous aggregates][hierarchical-caggs], for an even more fine-tuned aggregation.
+
+[Real-time aggregation][real-time-aggregation] enables you to combine pre-aggregated data from the materialized view with the most recent raw data. This gives you up-to-date results on every query. In TimescaleDB v2.13 and later, real-time aggregates are **DISABLED** by default. In earlier versions, real-time aggregates are **ENABLED** by default; when you create a continuous aggregate, queries to that view include the results from the most recent raw data.
+
+## Types of aggregation
+
+There are three main ways to make aggregation easier: materialized views,
+continuous aggregates, and real-time aggregates.
+
[Materialized views][pg-materialized views] are a standard Postgres feature.
They are used to cache the result of a complex query so that you can reuse it
later on. Materialized views do not update regularly, although you can manually
refresh them as required.
+
+[Continuous aggregates][about-caggs] are a TimescaleDB-only feature. They work in
+a similar way to a materialized view, but they are updated automatically in the
+background, as new data is added to your database. Continuous aggregates are
+updated continuously and incrementally, which means they are less resource
+intensive to maintain than materialized views. Continuous aggregates are based
+on hypertables, and you can query them in the same way as you do your other
+tables.
+
+[Real-time aggregates][real-time-aggs] are a TimescaleDB-only feature. They are
+the same as continuous aggregates, but they add the most recent raw data to the
+previously aggregated data to provide accurate and up-to-date results, without
+needing to aggregate data as it is being written.
+
+## Continuous aggregates on continuous aggregates
+
+You can create a continuous aggregate on top of another continuous aggregate.
+This allows you to summarize data at different granularity. For example, you
+might have a raw hypertable that contains second-by-second data. Create a
+continuous aggregate on the hypertable to calculate hourly data. To calculate
+daily data, create a continuous aggregate on top of your hourly continuous
+aggregate.
+
+For more information, see the documentation about
+[continuous aggregates on continuous aggregates][caggs-on-caggs].
+
+## Continuous aggregates with a `JOIN` clause
+
+Continuous aggregates support the following JOIN features:
+
+| Feature | TimescaleDB < 2.10.x | TimescaleDB <= 2.15.x | TimescaleDB >= 2.16.x|
+|-|-|-|-|
+|INNER JOIN|❌|✅|✅|
+|LEFT JOIN|❌|❌|✅|
+|LATERAL JOIN|❌|❌|✅|
+|Joins between **ONE** hypertable and **ONE** standard Postgres table|❌|✅|✅|
+|Joins between **ONE** hypertable and **MANY** standard Postgres tables|❌|❌|✅|
+|Join conditions must be equality conditions, and there can only be **ONE** `JOIN` condition|❌|✅|✅|
+|Any join conditions|❌|❌|✅|
+
Joins in continuous aggregates must meet the following conditions:

* Only changes to the hypertable are tracked, and they are updated in the
  continuous aggregate when it is refreshed. Changes to standard
  Postgres tables are not tracked.
* You can use `INNER`, `LEFT`, and `LATERAL` joins; no other join types are supported.
+* Joins on the materialized hypertable of a continuous aggregate are not supported.
+* Hierarchical continuous aggregates can be created on top of a continuous
+ aggregate with a `JOIN` clause, but cannot themselves have a `JOIN` clause.
+
### JOIN examples

Given the following schema:
+
+See the following `JOIN` examples on continuous aggregates:
+
+- `INNER JOIN` on a single equality condition, using the `ON` clause:
+
+- `INNER JOIN` on a single equality condition, using the `ON` clause, with a further condition added in the `WHERE` clause:
+
+- `INNER JOIN` on a single equality condition specified in `WHERE` clause:
+
+- `INNER JOIN` on multiple equality conditions:
+
+TimescaleDB v2.16.x and higher.
+
+- `INNER JOIN` with a single equality condition specified in `WHERE` clause can be combined with further conditions in the `WHERE` clause:
+
+TimescaleDB v2.16.x and higher.
+
+- `INNER JOIN` between a hypertable and multiple Postgres tables:
+
+TimescaleDB v2.16.x and higher.
+
+- `LEFT JOIN` between a hypertable and a Postgres table:
+
+TimescaleDB v2.16.x and higher.
+
+- `LATERAL JOIN` between a hypertable and a subquery:
+
+TimescaleDB v2.16.x and higher.
+
## Function support

In TimescaleDB v2.7 and later, continuous aggregates support all Postgres
aggregate functions. This includes both parallelizable aggregates, such as `SUM`
and `AVG`, and non-parallelizable aggregates, such as `RANK`.
+
In TimescaleDB v2.10.0 and later, the `FROM` clause supports joins, with
some restrictions. For more information, see the [`JOIN` support section][caggs-joins].
+
+In older versions of TimescaleDB, continuous aggregates only support
+[aggregate functions that can be parallelized by Postgres][postgres-parallel-agg].
+You can work around this by aggregating the other parts of your query in the
+continuous aggregate, then
+[using the window function to query the aggregate][cagg-window-functions].
+
+The following table summarizes the aggregate functions supported in continuous aggregates:
+
+| Function, clause, or feature |TimescaleDB 2.6 and earlier|TimescaleDB 2.7, 2.8, and 2.9|TimescaleDB 2.10 and later|
+|------------------------------------------------------------|-|-|-|
+| Parallelizable aggregate functions |✅|✅|✅|
+| [Non-parallelizable SQL aggregates][postgres-parallel-agg] |❌|✅|✅|
+| `ORDER BY` |❌|✅|✅|
+| Ordered-set aggregates |❌|✅|✅|
+| Hypothetical-set aggregates |❌|✅|✅|
+| `DISTINCT` in aggregate functions |❌|✅|✅|
+| `FILTER` in aggregate functions |❌|✅|✅|
+| `FROM` clause supports `JOINS` |❌|❌|✅|
+
+DISTINCT works in aggregate functions, not in the query definition. For example, for the table:
+
+- The following works:
+
+- This does not:
+
+If you want the old behavior in later versions of TimescaleDB, set the
+`timescaledb.finalized` parameter to `false` when you create your continuous
+aggregate.
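For example, to opt into the old format when creating a continuous aggregate, a minimal sketch over a hypothetical `conditions` hypertable with `time` and `temperature` columns (usually you want the default finalized format):

```sql
CREATE MATERIALIZED VIEW conditions_old_format
WITH (timescaledb.continuous, timescaledb.finalized = false) AS
SELECT time_bucket('1 day', time) AS bucket,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket;
```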
+
+## Components of a continuous aggregate
+
+Continuous aggregates consist of:
+
+* Materialization hypertable to store the aggregated data in
+* Materialization engine to aggregate data from the raw, underlying, table to
+ the materialization hypertable
+* Invalidation engine to determine when data needs to be re-materialized, due
+ to changes in the data
+* Query engine to access the aggregated data
+
+### Materialization hypertable
+
+Continuous aggregates take raw data from the original hypertable, aggregate it,
+and store the aggregated data in a materialization hypertable. When you query
+the continuous aggregate view, the aggregated data is returned to you as needed.
+
+Using the same temperature example, the materialization table looks like this:
+
+|day|location|chunk|avg temperature|
+|-|-|-|-|
+|2021/01/01|New York|1|73|
+|2021/01/01|Stockholm|1|70|
+|2021/01/02|New York|2||
+|2021/01/02|Stockholm|2|69|
+
+The materialization table is stored as a TimescaleDB hypertable, to take
+advantage of the scaling and query optimizations that hypertables offer.
+Materialization tables contain a column for each group-by clause in the query,
+and an `aggregate` column for each aggregate in the query.
+
+For more information, see [materialization hypertables][cagg-mat-hypertables].
+
+### Materialization engine
+
+The materialization engine performs two transactions. The first transaction
+blocks all INSERTs, UPDATEs, and DELETEs, determines the time range to
+materialize, and updates the invalidation threshold. The second transaction
+unblocks other transactions, and materializes the aggregates. The first
+transaction is very quick, and most of the work happens during the second
+transaction, to ensure that the work does not interfere with other operations.
+
+### Invalidation engine
+
+Any change to the data in a hypertable could potentially invalidate some
+materialized rows. The invalidation engine checks to ensure that the system does
+not become swamped with invalidations.
+
Fortunately, time-series data means that nearly all INSERTs and UPDATEs have a
recent timestamp, so the invalidation engine does not materialize all the data,
but only up to a set point in time called the materialization threshold. This
threshold is set so that the vast majority of INSERTs contain more recent
timestamps.
+These data points have never been materialized by the continuous aggregate, so
+there is no additional work needed to notify the continuous aggregate that they
+have been added. When the materializer next runs, it is responsible for
+determining how much new data can be materialized without invalidating the
+continuous aggregate. It then materializes the more recent data and moves the
+materialization threshold forward in time. This ensures that the threshold lags
+behind the point-in-time where data changes are common, and that most INSERTs do
+not require any extra writes.
+
When data older than the invalidation threshold is changed, the maximum and
minimum timestamps of the changed rows are logged, and the values are used to
determine which rows in the aggregation table need to be recalculated. This
+logging does cause some write load, but because the threshold lags behind the
+area of data that is currently changing, the writes are small and rare.
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/continuous-aggregates/time/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+CREATE TABLE locations (
+ id TEXT PRIMARY KEY,
+ name TEXT
+);
+
+CREATE TABLE devices (
+ id SERIAL PRIMARY KEY,
+ location_id TEXT,
+ name TEXT
+);
+
+CREATE TABLE conditions (
+ "time" TIMESTAMPTZ,
+ device_id INTEGER,
+ temperature FLOAT8
+) WITH (
+ tsdb.hypertable,
+ tsdb.partition_column='time'
+);
+```
+
+Example 2 (sql):
+```sql
+CREATE MATERIALIZED VIEW conditions_by_day WITH (timescaledb.continuous) AS
+ SELECT time_bucket('1 day', time) AS bucket, devices.name, MIN(temperature), MAX(temperature)
+ FROM conditions
+ JOIN devices ON devices.id = conditions.device_id
+ GROUP BY bucket, devices.name
+ WITH NO DATA;
+```
+
+Example 3 (sql):
+```sql
+CREATE MATERIALIZED VIEW conditions_by_day WITH (timescaledb.continuous) AS
+ SELECT time_bucket('1 day', time) AS bucket, devices.name, MIN(temperature), MAX(temperature)
+ FROM conditions
+ JOIN devices ON devices.id = conditions.device_id
+ WHERE devices.location_id = 'location123'
+ GROUP BY bucket, devices.name
+ WITH NO DATA;
+```
+
+Example 4 (sql):
+```sql
+CREATE MATERIALIZED VIEW conditions_by_day WITH (timescaledb.continuous) AS
+ SELECT time_bucket('1 day', time) AS bucket, devices.name, MIN(temperature), MAX(temperature)
+ FROM conditions, devices
+ WHERE devices.id = conditions.device_id
+ GROUP BY bucket, devices.name
+ WITH NO DATA;
+```
+
+---
+
+## Continuous aggregates
+
+**URL:** llms-txt#continuous-aggregates
+
+In modern applications, data usually grows very quickly. This means that aggregating
+it into useful summaries can become very slow. If you are collecting data very frequently, you might want to aggregate your
+data into minutes or hours instead. For example, if an IoT device takes
+temperature readings every second, you might want to find the average temperature
+for each hour. Every time you run this query, the database needs to scan the
+entire table and recalculate the average. TimescaleDB makes aggregating data lightning fast, accurate, and easy with continuous aggregates.
+
+
+
+Continuous aggregates in TimescaleDB are a kind of hypertable that is refreshed automatically
+in the background as new data is added, or old data is modified. Changes to your
+dataset are tracked, and the hypertable behind the continuous aggregate is
+automatically updated in the background.
+
Continuous aggregates have a much lower maintenance burden than regular Postgres materialized
views, because the whole view is not created from scratch on each refresh. This
means that you can get on with working with your data instead of maintaining your
database.
+
+Because continuous aggregates are based on hypertables, you can query them in exactly the same way as your other tables. This includes continuous aggregates in the rowstore, compressed into the [columnstore][hypercore],
+or [tiered to object storage][data-tiering]. You can even create [continuous aggregates on top of your continuous aggregates][hierarchical-caggs], for an even more fine-tuned aggregation.
+
+[Real-time aggregation][real-time-aggregation] enables you to combine pre-aggregated data from the materialized view with the most recent raw data. This gives you up-to-date results on every query. In TimescaleDB v2.13 and later, real-time aggregates are **DISABLED** by default. In earlier versions, real-time aggregates are **ENABLED** by default; when you create a continuous aggregate, queries to that view include the results from the most recent raw data.
+
+For more information about using continuous aggregates, see the documentation in [Use Tiger Data products][cagg-docs].
+
+===== PAGE: https://docs.tigerdata.com/api/data-retention/ =====
+
+---
+
+## refresh_continuous_aggregate()
+
+**URL:** llms-txt#refresh_continuous_aggregate()
+
+**Contents:**
+- Samples
+- Required arguments
+- Optional arguments
+
+Refresh all buckets of a continuous aggregate in the refresh window given by
+`window_start` and `window_end`.
+
A continuous aggregate materializes aggregates in time buckets, for example,
the min, max, and average over one day's worth of data, as determined by the
`time_bucket` interval. Therefore, when refreshing the continuous aggregate,
only buckets that completely fit within the refresh window are refreshed. In
other words, it is not possible to compute the aggregate over an incomplete
bucket, so any buckets that do not fit within the given refresh window are
excluded.
+
+The function expects the window parameter values to have a time type that is
+compatible with the continuous aggregate's time bucket expression—for
+example, if the time bucket is specified in `TIMESTAMP WITH TIME ZONE`, then the
+start and end time should be a date or timestamp type. Note that a continuous
+aggregate using the `TIMESTAMP WITH TIME ZONE` type aligns with the UTC time
+zone, so, if `window_start` and `window_end` is specified in the local time
+zone, any time zone shift relative UTC needs to be accounted for when refreshing
+to align with bucket boundaries.
+
To improve performance for continuous aggregate refresh, see
[CREATE MATERIALIZED VIEW][create_materialized_view].
+
+Refresh the continuous aggregate `conditions` between `2020-01-01` and
+`2020-02-01` exclusive.
+
+Alternatively, incrementally refresh the continuous aggregate `conditions`
+between `2020-01-01` and `2020-02-01` exclusive, working in `12h` intervals:
+
+Force the `conditions` continuous aggregate to refresh between `2020-01-01` and
+`2020-02-01` exclusive, even if the data has already been refreshed.
+
+## Required arguments
+
+|Name|Type|Description|
+|-|-|-|
+|`continuous_aggregate`|REGCLASS|The continuous aggregate to refresh.|
+|`window_start`|INTERVAL, TIMESTAMPTZ, INTEGER|Start of the window to refresh, has to be before `window_end`.|
+|`window_end`|INTERVAL, TIMESTAMPTZ, INTEGER|End of the window to refresh, has to be after `window_start`.|
+
+You must specify the `window_start` and `window_end` parameters differently,
+depending on the type of the time column of the hypertable. For hypertables with
+`TIMESTAMP`, `TIMESTAMPTZ`, and `DATE` time columns, set the refresh window as
+an `INTERVAL` type. For hypertables with integer-based timestamps, set the
+refresh window as an `INTEGER` type.
+
A `NULL` value for `window_start` is equivalent to the lowest changed element
in the raw hypertable of the CAgg. A `NULL` value for `window_end` is
equivalent to the largest changed element in the raw hypertable of the CAgg. As
changed element tracking is performed after the initial CAgg refresh, running a
CAgg refresh without `window_start` and `window_end` covers the entire time
range.
+
+Note that it's not guaranteed that all buckets will be updated: refreshes will
+not take place when buckets are materialized with no data changes or with
+changes that only occurred in the secondary table used in the JOIN.
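For instance, to cover the entire changed range in one call, pass `NULL` for both bounds (using the `conditions` continuous aggregate from the samples):

```sql
-- NULL bounds: refresh from the lowest to the highest changed element
CALL refresh_continuous_aggregate('conditions', NULL, NULL);
```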
+
+## Optional arguments
+
+|Name|Type| Description |
+|-|-|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `force` | BOOLEAN | Force refresh every bucket in the time range between `window_start` and `window_end`, even when the bucket has already been refreshed. This can be very expensive when a lot of data is refreshed. Default is `FALSE`. |
+| `refresh_newest_first` | BOOLEAN | Set to `FALSE` to refresh the oldest data first. Default is `TRUE`. |
+
+===== PAGE: https://docs.tigerdata.com/api/continuous-aggregates/remove_policies/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+CALL refresh_continuous_aggregate('conditions', '2020-01-01', '2020-02-01');
+```
+
+Example 2 (sql):
+```sql
+DO
+$$
+DECLARE
+ refresh_interval INTERVAL = '12h'::INTERVAL;
+ start_timestamp TIMESTAMPTZ = '2020-01-01T00:00:00Z';
+ end_timestamp TIMESTAMPTZ = start_timestamp + refresh_interval;
+BEGIN
+ WHILE start_timestamp < '2020-02-01T00:00:00Z' LOOP
+ CALL refresh_continuous_aggregate('conditions', start_timestamp, end_timestamp);
+ COMMIT;
+ RAISE NOTICE 'finished with timestamp %', end_timestamp;
+ start_timestamp = end_timestamp;
+ end_timestamp = end_timestamp + refresh_interval;
+ END LOOP;
+END
+$$;
+```
+
+Example 3 (sql):
+```sql
+CALL refresh_continuous_aggregate('conditions', '2020-01-01', '2020-02-01', force => TRUE);
+```
+
+---
+
+## DROP MATERIALIZED VIEW (Continuous Aggregate)
+
+**URL:** llms-txt#drop-materialized-view-(continuous-aggregate)
+
+**Contents:**
+- Samples
+- Parameters
+
+Continuous aggregate views can be dropped using the `DROP MATERIALIZED VIEW` statement.
+
+This statement deletes the continuous aggregate and all its internal
+objects. It also removes refresh policies for that
+aggregate. To delete other dependent objects, such as a view
+defined on the continuous aggregate, add the `CASCADE`
+option. Dropping a continuous aggregate does not affect the data in
+the underlying hypertable from which the continuous aggregate is
+derived.
+
+Drop existing continuous aggregate.
+
+|Name|Type|Description|
+|---|---|---|
| `<view_name>` | TEXT | Name (optionally schema-qualified) of the continuous aggregate view to be dropped.|
+
+===== PAGE: https://docs.tigerdata.com/api/continuous-aggregates/remove_all_policies/ =====
+
+**Examples:**
+
Example 1 (sql):
```sql
DROP MATERIALIZED VIEW <view_name>;
```
+
+---
+
+## Migrate a continuous aggregate to the new form
+
+**URL:** llms-txt#migrate-a-continuous-aggregate-to-the-new-form
+
+**Contents:**
+- Configure continuous aggregate migration
+- Check on continuous aggregate migration status
+- Troubleshooting
+ - Permissions error when migrating a continuous aggregate
+
+In TimescaleDB v2.7 and later, continuous aggregates use a new format that
+improves performance and makes them compatible with more SQL queries. Continuous
+aggregates created in older versions of TimescaleDB, or created in a new version
+with the option `timescaledb.finalized` set to `false`, use the old format.
+
+To migrate a continuous aggregate from the old format to the new format, you can
+use this procedure. It automatically copies over your data and policies. You can
+continue to use the continuous aggregate while the migration is happening.
+
+Connect to your database and run:
+
+There are known issues with `cagg_migrate()` in version 2.8.0.
+Upgrade to version 2.8.1 or later before using it.
+
+## Configure continuous aggregate migration
+
+The migration procedure provides two boolean configuration parameters,
+`override` and `drop_old`. By default, the name of your new continuous
+aggregate is the name of your old continuous aggregate, with the suffix `_new`.
+
+Set `override` to true to rename your new continuous aggregate with the
+original name. The old continuous aggregate is renamed with the suffix `_old`.
+
+To both rename and drop the old continuous aggregate entirely, set both
+parameters to true. Note that `drop_old` must be used together with
+`override`.
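Putting both options together, a sketch using a hypothetical continuous aggregate named `conditions_summary`:

```sql
-- Migrate, give the new aggregate the original name, and drop the old one
CALL cagg_migrate('conditions_summary', override => true, drop_old => true);
```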
+
+## Check on continuous aggregate migration status
+
+To check the progress of the continuous aggregate migration, query the migration
+planning table:
+
+### Permissions error when migrating a continuous aggregate
+
+You might get a permissions error when migrating a continuous aggregate from old
+to new format using `cagg_migrate`. The user performing the migration must have
+the following permissions:
+
* Select, insert, and update permissions on the tables
  `_timescaledb_catalog.continuous_agg_migrate_plan` and
  `_timescaledb_catalog.continuous_agg_migrate_plan_step`
+* Usage permissions on the sequence
+ `_timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq`
+
+To solve the problem, change to a user capable of granting permissions, and
+grant the following permissions to the user performing the migration:
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/continuous-aggregates/compression-on-continuous-aggregates/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
CALL cagg_migrate('<cagg_name>');
+```
+
+Example 2 (sql):
+```sql
+SELECT * FROM _timescaledb_catalog.continuous_agg_migrate_plan_step;
+```
+
+Example 3 (sql):
+```sql
GRANT SELECT, INSERT, UPDATE ON TABLE _timescaledb_catalog.continuous_agg_migrate_plan TO <user>;
GRANT SELECT, INSERT, UPDATE ON TABLE _timescaledb_catalog.continuous_agg_migrate_plan_step TO <user>;
GRANT USAGE ON SEQUENCE _timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq TO <user>;
+```
+
+---
+
+## Refresh continuous aggregates
+
+**URL:** llms-txt#refresh-continuous-aggregates
+
+**Contents:**
+- Prerequisites
+- Change the refresh policy
+- Add concurrent refresh policies
+- Manually refresh a continuous aggregate
+
+Continuous aggregates can have a range of different refresh policies. In
+addition to refreshing the continuous aggregate automatically using a policy,
+you can also refresh it manually.
+
+To follow the procedure on this page you need to:
+
+* Create a [target Tiger Cloud service][create-service].
+
+This procedure also works for [self-hosted TimescaleDB][enable-timescaledb].
+
+## Change the refresh policy
+
+Continuous aggregates require a policy for automatic refreshing. You can adjust
+this to suit different use cases. For example, you can have the continuous
+aggregate and the hypertable stay in sync, even when data is removed from the
+hypertable. Alternatively, you could keep source data in the continuous aggregate even after
+it is removed from the hypertable.
+
+You can change the way your continuous aggregate is refreshed by calling
+`add_continuous_aggregate_policy`.
+
+Among others, `add_continuous_aggregate_policy` takes the following arguments:
+
+* `start_offset`: the start of the refresh window relative to when the policy
+ runs
+* `end_offset`: the end of the refresh window relative to when the policy runs
+* `schedule_interval`: the refresh interval in minutes or hours. Defaults to
+ 24 hours.
+
+- If you set the `start_offset` or `end_offset` to `NULL`, the range is open-ended and extends to the beginning or end of time.
+- If you set `end_offset` within the current time bucket, this bucket is excluded from materialization for the following reasons:
+  - The current bucket is incomplete and can't be refreshed.
+  - The current bucket gets a lot of writes in timestamp order, so its aggregate becomes outdated very quickly. Excluding it improves performance.
+
+To include the latest raw data in queries, enable [real-time aggregation][future-watermark].
+
+See the [API reference][api-reference] for the full list of required and optional arguments and use examples.
+
+The policy in the following example ensures that all data in the continuous aggregate is up to date with the hypertable, except for data written within the last hour of wall-clock time. The policy also does not refresh the last time bucket of the continuous aggregate.
+
+Since the policy in this example runs once every hour (`schedule_interval`) while also excluding data within the most recent hour (`end_offset`), it takes up to 2 hours for data written to the hypertable to be reflected in the continuous aggregate. Backfills, which are usually outside the most recent hour of data, become visible after up to 1 hour, depending on when the policy last ran relative to when the data was written.
+
+Because it has an open-ended `start_offset` parameter, any data that is removed
+from the table, for example with a `DELETE` or with `drop_chunks`, is also removed
+from the continuous aggregate view. This means that the continuous aggregate
+always reflects the data in the underlying hypertable.
+
+To change a refresh policy to use a `NULL` `start_offset`:
+
+1. **Connect to your Tiger Cloud service**
+
+In [Tiger Cloud Console][services-portal] open an [SQL editor][in-console-editors]. You can also connect to your service using [psql][connect-using-psql].
+
+1. Create a new policy on `conditions_summary_hourly` that keeps the continuous aggregate up to date, and runs every hour:
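+
+For example:
+
+```sql
+SELECT add_continuous_aggregate_policy('conditions_summary_hourly',
+    start_offset => NULL,
+    end_offset => INTERVAL '1 h',
+    schedule_interval => INTERVAL '1 h');
+```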
+
+If you want to keep data in the continuous aggregate even if it is removed from
+the underlying hypertable, you can set the `start_offset` to match the
+[data retention policy][sec-data-retention] on the source hypertable. For example,
+if you have a retention policy that removes data older than one month, set
+`start_offset` to one month or less. This sets your policy so that it does not
+refresh the dropped data.
+
+1. Connect to your Tiger Cloud service.
+
+In [Tiger Cloud Console][services-portal] open an [SQL editor][in-console-editors]. You can also connect to your service using [psql][connect-using-psql].
+
+1. Create a new policy on `conditions_summary_hourly`
+ that keeps data removed from the hypertable in the continuous aggregate, and
+ runs every hour:
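+
+For example, matching a one-month retention window:
+
+```sql
+SELECT add_continuous_aggregate_policy('conditions_summary_hourly',
+    start_offset => INTERVAL '1 month',
+    end_offset => INTERVAL '1 h',
+    schedule_interval => INTERVAL '1 h');
+```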
+
+It is important to consider your data retention policies when you're setting up
+continuous aggregate policies. If the continuous aggregate policy window covers
+data that is removed by the data retention policy, the data will be removed when
+the aggregates for those buckets are refreshed. For example, if you have a data
+retention policy that removes all data older than two weeks, the continuous
+aggregate policy will only have data for the last two weeks.
+
+## Add concurrent refresh policies
+
+You can add multiple concurrent refresh policies to each continuous aggregate, as long as their
+start and end offsets don't overlap. For example, to backfill data into older chunks, you can
+set up one policy that refreshes recent data, and another that refreshes backfilled data.
+
+The first policy in this example keeps the continuous aggregate up to date with data that was
+inserted in the past day. Any data that was inserted or updated for previous days is refreshed by
+the second policy.
+
+1. Connect to your Tiger Cloud service.
+
+In [Tiger Cloud Console][services-portal] open an [SQL editor][in-console-editors]. You can also connect to your service using [psql][connect-using-psql].
+
+1. Create a new policy on `conditions_summary_daily`
+ to refresh the continuous aggregate with recently inserted data which runs
+ hourly:
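+
+For example:
+
+```sql
+SELECT add_continuous_aggregate_policy('conditions_summary_daily',
+    start_offset => INTERVAL '1 day',
+    end_offset => INTERVAL '1 h',
+    schedule_interval => INTERVAL '1 h');
+```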
+
+2. At the `psql` prompt, create a concurrent policy on
+ `conditions_summary_daily` to refresh the continuous aggregate with
+ backfilled data:
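+
+For example, using an open-ended `start_offset` to cover older data:
+
+```sql
+SELECT add_continuous_aggregate_policy('conditions_summary_daily',
+    start_offset => NULL,
+    end_offset => INTERVAL '1 day',
+    schedule_interval => INTERVAL '1 hour');
+```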
+
+## Manually refresh a continuous aggregate
+
+If you need to manually refresh a continuous aggregate, you can use the
+`refresh` command. This recomputes the data within the window that has changed
+in the underlying hypertable since the last refresh. Therefore, if only a few
+buckets need updating, the refresh runs quickly.
+
+If you have recently dropped data from a hypertable with a continuous aggregate,
+calling `refresh_continuous_aggregate` on a region containing dropped chunks
+recalculates the aggregate without the dropped data. See
+[drop data][cagg-drop-data] for more information.
+
+The `refresh` command takes three arguments:
+
+* The name of the continuous aggregate view to refresh
+* The timestamp of the beginning of the refresh window
+* The timestamp of the end of the refresh window
+
+Only buckets that are wholly within the specified range are refreshed. For
+example, if you specify `'2021-05-01', '2021-06-01'`, the only buckets that are
+refreshed are those up to but not including 2021-06-01. It is possible to
+specify `NULL` in a manual refresh to get an open-ended range, but we do not
+recommend it, because you could inadvertently materialize a large amount
+of data, slow down your performance, and have unintended consequences on other
+policies, such as data retention.
+
+To manually refresh a continuous aggregate, use the `refresh` command:
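+
+For example, to refresh the May 2021 buckets of `conditions_summary_daily` (the view name follows the examples on this page):
+
+```sql
+CALL refresh_continuous_aggregate('conditions_summary_daily', '2021-05-01', '2021-06-01');
+```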
+
+Follow the logic used by automated refresh policies and avoid refreshing time buckets that are likely to have a lot of writes. This means that you should generally not refresh the latest incomplete time bucket. To include the latest raw data in your queries, use [real-time aggregation][real-time-aggregates] instead.
+
+===== PAGE: https://docs.tigerdata.com/use-timescale/continuous-aggregates/drop-data/ =====
+
+**Examples:**
+
+Example 1 (sql):
+```sql
+SELECT add_continuous_aggregate_policy('conditions_summary_hourly',
+ start_offset => NULL,
+ end_offset => INTERVAL '1 h',
+ schedule_interval => INTERVAL '1 h');
+```
+
+Example 2 (sql):
+```sql
+SELECT add_continuous_aggregate_policy('conditions_summary_hourly',
+ start_offset => INTERVAL '1 month',
+ end_offset => INTERVAL '1 h',
+ schedule_interval => INTERVAL '1 h');
+```
+
+Example 3 (sql):
+```sql
+SELECT add_continuous_aggregate_policy('conditions_summary_daily',
+ start_offset => INTERVAL '1 day',
+ end_offset => INTERVAL '1 h',
+ schedule_interval => INTERVAL '1 h');
+```
+
+Example 4 (sql):
+```sql
+SELECT add_continuous_aggregate_policy('conditions_summary_daily',
+    start_offset => NULL,
+ end_offset => INTERVAL '1 day',
+ schedule_interval => INTERVAL '1 hour');
+```
+
+---
diff --git a/skills/timescaledb/references/getting_started.md b/skills/timescaledb/references/getting_started.md
new file mode 100644
index 0000000..d1b704d
--- /dev/null
+++ b/skills/timescaledb/references/getting_started.md
@@ -0,0 +1,2098 @@
+# Timescaledb - Getting Started
+
+**Pages:** 3
+
+---
+
+## Start coding with Tiger Data
+
+**URL:** llms-txt#start-coding-with-tiger-data
+
+Easily integrate your app with Tiger Cloud or self-hosted TimescaleDB. Use your favorite programming language to connect to your
+Tiger Cloud service, create and manage hypertables, then ingest and query data.
+
+---
+
+## "Quick Start: Ruby and TimescaleDB"
+
+**URL:** llms-txt#"quick-start:-ruby-and-timescaledb"
+
+**Contents:**
+- Prerequisites
+- Connect a Rails app to your service
+- Optimize time-series data in hypertables
+- Insert data into your service
+- Reference
+ - Query scopes
+ - TimescaleDB features
+- Next steps
+- Load energy consumption data
+ - 6e. Enable policies that compress data in the target hypertable
+
+To follow the steps on this page:
+
+* Create a target [Tiger Cloud service][create-service] with the Real-time analytics capability.
+
+You need [your connection details][connection-info]. This procedure also
+ works for [self-hosted TimescaleDB][enable-timescaledb].
+
+* Install [Rails][rails-guide].
+
+## Connect a Rails app to your service
+
+Every Tiger Cloud service is a 100% Postgres database hosted in Tiger Cloud with
+Tiger Data extensions such as TimescaleDB. You connect to your Tiger Cloud service
+from a standard Rails app configured for Postgres.
+
+1. **Create a new Rails app configured for Postgres**
+
+Rails creates and bundles your app, then installs the standard Postgres Gems.
+
+1. **Install the TimescaleDB gem**
+
+1. Open `Gemfile`, add the following line, then save your changes:
+
+1. In Terminal, run the following command:
+
+1. **Connect your app to your Tiger Cloud service**
+
+1. In `/config/database.yml`, update the configuration to securely connect to your Tiger Cloud service
+ by adding `url: <%= ENV['DATABASE_URL'] %>` to the default configuration:
+
+1. Set the environment variable for `DATABASE_URL` to the value of `Service URL` from
+   your [connection details][connection-info].
+
+1. Create the database:
+ - **Tiger Cloud**: nothing to do. The database is part of your Tiger Cloud service.
+ - **Self-hosted TimescaleDB**, create the database for the project:
+
+1. Verify the connection from your app to your Tiger Cloud service:
+
+The result shows the list of extensions in your Tiger Cloud service:
+
+| Name | Version | Schema | Description |
+| -- | -- | -- | -- |
+| pg_buffercache | 1.5 | public | examine the shared buffer cache |
+| pg_stat_statements | 1.11 | public | track planning and execution statistics of all SQL statements executed |
+| plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language |
+| postgres_fdw | 1.1 | public | foreign-data wrapper for remote Postgres servers |
+| timescaledb | 2.18.1 | public | Enables scalable inserts and complex queries for time-series data (Community Edition) |
+| timescaledb_toolkit | 1.19.0 | public | Library of analytical hyperfunctions, time-series pipelining, and other SQL utilities |
+
+## Optimize time-series data in hypertables
+
+Hypertables are Postgres tables designed to simplify and accelerate data analysis. Anything
+you can do with regular Postgres tables, you can do with hypertables - but much faster and more conveniently.
+
+In this section, you use the helpers in the TimescaleDB gem to create and manage a [hypertable][about-hypertables].
+
+1. **Generate a migration to create the page loads table**
+
+This creates the `/db/migrate/_create_page_loads.rb` migration file.
+
+1. **Add hypertable options**
+
+Replace the contents of `/db/migrate/_create_page_loads.rb`
+ with the following:
+
+The `id` column is not included in the table, because TimescaleDB requires that any `UNIQUE` or `PRIMARY KEY`
+ indexes on the table include all partitioning columns. In this case, that is the time column. A new
+ Rails model includes a `PRIMARY KEY` index for `id` by default: either remove the column or make sure that the index
+ includes `time` as part of a composite key.
+
+For more information, check the Rails docs on [composite primary keys][rails-compostite-primary-keys].
+
+1. **Create a `PageLoad` model**
+
+Create a new file called `/app/models/page_load.rb` and add the following code:
+
+1. **Run the migration**
+
+## Insert data into your service
+
+The TimescaleDB gem provides efficient ways to insert data into hypertables. This section
+shows you how to ingest test data into your hypertable.
+
+1. **Create a controller to handle page loads**
+
+Create a new file called `/app/controllers/application_controller.rb` and add the following code:
+
+1. **Generate some test data**
+
+Use `bin/console` to start a Rails console session and run the following code
+ to define some random page load access data:
+
+1. **Insert the generated data into your Tiger Cloud service**
+
+1. **Validate the test data in your Tiger Cloud service**
+
+## Reference
+
+This section lists the most common tasks you might perform with the TimescaleDB gem.
+
+### Query scopes
+
+The TimescaleDB gem provides several convenient scopes for querying your time-series data.
+
+- Built-in time-based scopes:
+
+- Browser-specific scopes:
+
+- Query continuous aggregates:
+
+This query fetches the average and standard deviation from the performance stats for the `/products` path over the last day.
+
+### TimescaleDB features
+
+The TimescaleDB gem provides utility methods to access hypertable and chunk information. Every model that uses
+the `acts_as_hypertable` method has access to these methods.
+
+#### Access hypertable and chunk information
+
+- View chunk or hypertable information:
+
+- Compress/Decompress chunks:
+
+#### Access hypertable stats
+
+You collect hypertable stats using methods that provide insights into your hypertable's structure, size, and compression
+status:
+
+- Get basic hypertable information:
+
+- Get detailed size information:
+
+#### Continuous aggregates
+
+The `continuous_aggregates` method generates a class for each continuous aggregate.
+
+- Get all the continuous aggregate classes:
+
+- Manually refresh a continuous aggregate:
+
+- Create or drop a continuous aggregate:
+
+Create or drop all the continuous aggregates in the proper order to build them hierarchically. See more about how it
+ works in this [blog post][ruby-blog-post].
+
+Now that you have integrated the Ruby gem into your app:
+
+* Learn more about the [TimescaleDB gem](https://github.com/timescale/timescaledb-ruby).
+* Check out the [official docs](https://timescale.github.io/timescaledb-ruby/).
+* Follow the [LTTB][LTTB], [Open AI long-term storage][open-ai-tutorial], and [candlesticks][candlesticks] tutorials.
+
+===== PAGE: https://docs.tigerdata.com/_partials/_add-data-energy/ =====
+
+## Load energy consumption data
+
+When you have your database set up, you can load the energy consumption data
+into the `metrics` hypertable.
+
+This is a large dataset, so it might take a long time, depending on your network
+connection.
+
+1. Download the dataset:
+
+[metrics.csv.gz](https://assets.timescale.com/docs/downloads/metrics.csv.gz)
+
+1. Use your file manager to decompress the downloaded dataset, and take a note
+ of the path to the `metrics.csv` file.
+
+1. At the psql prompt, copy the data from the `metrics.csv` file into
+ your hypertable. Make sure you point to the correct path, if it is not in
+ your current working directory:
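+
+A sketch of the copy step, assuming the hypertable is named `metrics` and its column order matches the CSV:
+
+```sql
+\copy metrics FROM './metrics.csv' WITH (FORMAT csv);
+```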
+
+1. You can check that the data has been copied successfully with this command:
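+
+For example, assuming the hypertable is named `metrics`:
+
+```sql
+SELECT * FROM metrics LIMIT 5;
+```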
+
+You should get five records that look like this:
+
+===== PAGE: https://docs.tigerdata.com/_partials/_migrate_dual_write_dump_database_roles/ =====
+
+Tiger Cloud services do not support roles with superuser access. If your SQL
+dump includes roles that have such permissions, you'll need to modify the file
+to be compliant with the security model.
+
+You can use the following `sed` command to remove unsupported statements and
+permissions from your roles.sql file:
+
+This command works only with the GNU implementation of sed (sometimes referred
+to as gsed). For the BSD implementation (the default on macOS), you need to
+add an extra argument to change the `-i` flag to `-i ''`.
+
+To check the sed version, you can use the command `sed --version`. While the
+GNU version explicitly identifies itself as GNU, the BSD version of sed
+generally doesn't provide a straightforward `--version` flag and simply outputs
+an "illegal option" error.
+
+A brief explanation of this script is:
+
+- `CREATE ROLE "postgres"` and `ALTER ROLE "postgres"`: These statements are
+ removed because they require superuser access, which is not supported
+ by Timescale.
+
+- `(NO)SUPERUSER` | `(NO)REPLICATION` | `(NO)BYPASSRLS`: These are permissions
+ that require superuser access.
+
+- `GRANTED BY role_specification`: The GRANTED BY clause can also have permissions that
+ require superuser access and should therefore be removed. Note: according to the
+ TimescaleDB documentation, the GRANTOR in the GRANTED BY clause must be the
+ current user, and this clause mainly serves the purpose of SQL compatibility.
+ Therefore, it's safe to remove it.
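+
+The exact command isn't included in this excerpt; a minimal GNU `sed` sketch covering the statements listed above might look like this (the `roles.sql` content here is hypothetical):
+
+```shell
+# Hypothetical excerpt of a roles.sql dump, for illustration only
+cat > roles.sql <<'EOF'
+CREATE ROLE "postgres";
+ALTER ROLE "postgres" WITH SUPERUSER;
+CREATE ROLE "app" WITH NOSUPERUSER NOREPLICATION LOGIN;
+GRANT "reader" TO "app" GRANTED BY "postgres";
+EOF
+
+# Remove statements and attributes that require superuser access (GNU sed)
+sed -i -E \
+  -e '/ROLE "postgres"/d' \
+  -e 's/(NO)?SUPERUSER//g' \
+  -e 's/(NO)?REPLICATION//g' \
+  -e 's/(NO)?BYPASSRLS//g' \
+  -e 's/GRANTED BY "[^"]+"//g' \
+  roles.sql
+
+cat roles.sql
+```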
+
+===== PAGE: https://docs.tigerdata.com/_partials/_install-self-hosted-debian-based-start/ =====
+
+1. **Install the latest Postgres packages**
+
+1. **Run the Postgres package setup script**
+
+===== PAGE: https://docs.tigerdata.com/_partials/_free-plan-beta/ =====
+
+The Free pricing plan and services are currently in beta.
+
+===== PAGE: https://docs.tigerdata.com/_partials/_livesync-configure-source-database/ =====
+
+1. **Tune the Write Ahead Log (WAL) on the Postgres source database**
+
+* [GUC `wal_level` as `logical`](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-LEVEL)
+* [GUC `max_wal_senders` as 10](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-WAL-SENDERS)
+* [GUC `wal_sender_timeout` as 0](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-WAL-SENDER-TIMEOUT)
+
+This will require a restart of the Postgres source database.
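+
+As a sketch, these settings can be applied with `ALTER SYSTEM` from a superuser session on the source database, then activated by the restart:
+
+```sql
+ALTER SYSTEM SET wal_level = 'logical';
+ALTER SYSTEM SET max_wal_senders = 10;
+ALTER SYSTEM SET wal_sender_timeout = 0;
+```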
+
+1. **Create a user for the connector and assign permissions**
+
+1. Create ``:
+
+You can use an existing user. However, you must ensure that the user has the following permissions.
+
+1. Grant permissions to create a replication slot:
+
+1. Grant permissions to create a publication:
+
+1. Assign the user permissions on the source database:
+
+If the tables you are syncing are not in the `public` schema, grant the user permissions for each schema you are syncing:
+
+1. On each table you want to sync, make `` the owner:
+
+You can skip this step if the replicating user is already the owner of the tables.
+
+1. **Enable replication of `DELETE` and `UPDATE` operations**
+
+Replica identity assists data replication by identifying the rows being modified. Each table
+ and hypertable in the source database should have one of the following:
+- **A primary key**: data replication defaults to the primary key of the table being replicated.
+ Nothing to do.
+- **A viable unique index**: each table has a unique, non-partial, non-deferrable index that includes only columns
+ marked as `NOT NULL`. If a `UNIQUE` index does not exist, create one to assist the migration. You can delete it after
+ migration.
+
+For each table, set `REPLICA IDENTITY` to the viable unique index:
+
+- **No primary key or viable unique index**: use brute force.
+
+For each table, set `REPLICA IDENTITY` to `FULL`:
+
+ For each `UPDATE` or `DELETE` statement, Postgres reads the whole table to find all matching rows. This results
+ in significantly slower replication. If you are expecting a large number of `UPDATE` or `DELETE` operations on the table,
+ best practice is to not use `FULL`.
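+
+The two `REPLICA IDENTITY` forms described above can be sketched as follows (`my_table` and `my_table_unique_idx` are hypothetical names):
+
+```sql
+-- Point REPLICA IDENTITY at a viable unique index
+ALTER TABLE my_table REPLICA IDENTITY USING INDEX my_table_unique_idx;
+
+-- Fall back to FULL when there is no key or viable index
+ALTER TABLE my_table REPLICA IDENTITY FULL;
+```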
+
+===== PAGE: https://docs.tigerdata.com/_partials/_datadog-data-exporter/ =====
+
+1. **In Tiger Cloud Console, open [Exporters][console-integrations]**
+1. **Click `New exporter`**
+1. **Select `Metrics` for `Data type` and `Datadog` for provider**
+
+
+
+1. **Choose your AWS region and provide the API key**
+
+The AWS region must be the same for your Tiger Cloud exporter and the Datadog provider.
+
+1. **Set `Site` to your Datadog region, then click `Create exporter`**
+
+===== PAGE: https://docs.tigerdata.com/_partials/_migrate_dual_write_6e_turn_on_compression_policies/ =====
+
+### 6e. Enable policies that compress data in the target hypertable
+
+In the following command, replace `` with the fully qualified table
+name of the target hypertable, for example `public.metrics`:
+
+===== PAGE: https://docs.tigerdata.com/_partials/_install-self-hosted-redhat-rocky/ =====
+
+1. **Install TimescaleDB**
+
+To avoid errors, **do not** install TimescaleDB Apache 2 Edition and TimescaleDB Community Edition at the same time.
+
+1. **Initialize the Postgres instance**
+
+1. **Tune your Postgres instance for TimescaleDB**
+
+This script is included with the `timescaledb-tools` package when you install TimescaleDB.
+ For more information, see [configuration][config].
+
+1. **Enable and start Postgres**
+
+1. **Log in to Postgres as `postgres`**
+
+You are now in the psql shell.
+
+1. **Set the password for `postgres`**
+
+When you have set the password, type `\q` to exit psql.
+
+===== PAGE: https://docs.tigerdata.com/_partials/_cloud-mst-restart-workers/ =====
+
+On Tiger Cloud and Managed Service for TimescaleDB, restart background workers by doing one of the following:
+
+* Run `SELECT timescaledb_pre_restore()`, followed by `SELECT
+ timescaledb_post_restore()`.
+* Power the service off and on again. This might cause a downtime of a few
+ minutes while the service restores from backup and replays the write-ahead
+ log.
+
+===== PAGE: https://docs.tigerdata.com/_partials/_migrate_live_setup_enable_replication/ =====
+
+Replica identity assists data replication by identifying the rows being modified. Each table
+ and hypertable in the source database should have one of the following:
+- **A primary key**: data replication defaults to the primary key of the table being replicated.
+ Nothing to do.
+- **A viable unique index**: each table has a unique, non-partial, non-deferrable index that includes only columns
+ marked as `NOT NULL`. If a `UNIQUE` index does not exist, create one to assist the migration. You can delete it after
+ migration.
+
+For each table, set `REPLICA IDENTITY` to the viable unique index:
+
+- **No primary key or viable unique index**: use brute force.
+
+For each table, set `REPLICA IDENTITY` to `FULL`:
+
+ For each `UPDATE` or `DELETE` statement, Postgres reads the whole table to find all matching rows. This results
+ in significantly slower replication. If you are expecting a large number of `UPDATE` or `DELETE` operations on the table,
+ best practice is to not use `FULL`.
+
+===== PAGE: https://docs.tigerdata.com/_partials/_timescale-cloud-platforms/ =====
+
+You use Tiger Data's open-source products to create your best app from the comfort of your own developer environment.
+
+See the [available services][available-services] and [supported systems][supported-systems].
+
+### Available services
+
+Tiger Data offers the following services for your self-hosted installations:
+
+
+