2025-03-16 17:00 Hatena Bookmark – Technology – Popular Entries

Title / Bookmark count

Creating diagrams with Claude 3.7 Sonnet and draw.io: something every businessperson can use | 遠藤巧巳 – JapanMarketing合同会社
1064
With Claude 3.7 Sonnet, the prompt "Write XML for draw.io that clearly illustrates the agile development cycle" produced a usable draw.io diagram too 🙌 https://t.co/lVhpEP78fR pic.twitter.com/t2acT96pZZ The post got many likes and reposts on X. It covers the whole funnel, from learning about a product through an ad, to signing the contract, to growing satisfaction, to spreading the word. [Prompt] "Write XML usable in draw.io: a screen mock-up of a message being sent by a Slack chatbot. We are building the chatbot and want to include the image in a proposal for a client." [Prompt] "Write XML usable in draw.io."
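As a rough, hedged sketch of the workflow described above (not code from the article), a prompt like this could be sent with the official @anthropic-ai/sdk for TypeScript; the model ID and prompt wording are assumptions, and the returned XML is simply printed so it can be pasted into draw.io via Extras > Edit Diagram.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Assumes ANTHROPIC_API_KEY is set in the environment.
const client = new Anthropic();

async function generateDrawioXml(): Promise<string> {
  const response = await client.messages.create({
    model: "claude-3-7-sonnet-20250219", // model ID is an assumption
    max_tokens: 2048,
    messages: [
      {
        role: "user",
        content:
          "Write XML usable in draw.io that clearly illustrates an agile development cycle.",
      },
    ],
  });
  // Join the text blocks of the reply into one XML string.
  return response.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("");
}

generateDrawioXml().then((xml) => console.log(xml));
```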

A look at a suspicious scam-call case involving the Shinjuku Police Station's main phone number
371
For example, in the United States there have been reported incidents in which the phone number of the U.S. Capitol Police (USCP) was spoofed for fraud, with callers trying to extract money or personal information under pretexts such as "having an arrest warrant withdrawn" (U.S. Capitol Police Phone Numbers Used in "Caller ID Spoofing" Scam | United States Capitol Police). https://rocket-boys.co.jp/security-measures-lab/shinjuku-police-station-spoofed-number-scam-call/ https://whoscall.com/ja/blog/articles/1337-電話スプーフィングの脅威!詐欺対策と効果的な防御法 https://www.kaspersky.co.jp/resource-center/preemptive-safety/phone-number-spoofing Caller ID spoofing is a technique that falsifies the information shown on the recipient's caller ID display so that a number or caller name different from the real one is displayed. These calls are placed from overseas via VoIP and the like while displaying a spoofed Japanese phone number; according to Toyo Keizai Online, sophisticated scam calls using "country codes that do not exist" have also been reported (for example, spoofing with numbers that cannot exist, such as "+1 (200)…"; see "Beware of anonymous calls and unfamiliar international numbers '+XX'! Their dangers and …").

AWS design prompt
252
This architecture is based on a multi-AZ configuration to meet availability of 99.9% or higher with an RPO of 30 minutes and an RTO of 2 hours; on the security side, it focuses on encryption and access control to comply with the Act on the Protection of Personal Information and the rules for handling medical information. *Using the whole prompt produces a large amount of output, so extracting only the relevant parts is recommended. Below is an example of ChatGPT's output. The architecture proposal below is designed in detail based on the five pillars of the AWS Well-Architected Framework (operational excellence, security, reliability, performance efficiency, and cost optimization), taking into account business requirements, non-functional requirements, technical constraints, and budget constraints. The core of an AI prompt that generates a simple yet comprehensive AWS design is:
Structured output format: clearly specify the chapter structure of the design document and what each section should explain
Concrete parameter requirements: ask for specific, implementation-ready configuration values rather than abstract descriptions
Clear selection rationale: ask for an explanation of why each choice was made
Comparison with alternatives: include a comparison with the alternative options that were considered
Application of Well-Architected principles: encourage a design based on AWS best practices. With this approach you can efficiently create high-quality AWS design documents that translate directly into implementation, while leveraging the power of AI (a sketch of such a prompt follows below).
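As a minimal sketch of what such a prompt could look like (the wording below is illustrative and assumed, not the article's published prompt), the five points can be packed into a single reusable template:

```typescript
// Hypothetical prompt template reflecting the five points above; adjust the requirements to your project.
const awsDesignPrompt = `
You are an AWS solutions architect. Produce an AWS design document following these rules:

1. Structured output: use the chapters "Overview", "Network", "Compute", "Data", "Security", "Operations", "Cost".
2. Concrete parameters: give implementation-ready values (instance classes, subnet CIDRs, backup retention days), not abstract descriptions.
3. Rationale: for every major choice, explain why it was selected.
4. Alternatives: list the options that were considered and compare them briefly.
5. Well-Architected: map each decision to the relevant pillar (operational excellence, security, reliability, performance efficiency, cost optimization).

Requirements: 99.9% availability, RPO 30 minutes, RTO 2 hours, multi-AZ, and encryption and access control suitable for personal and medical data.
`;

console.log(awsDesignPrompt.trim());
```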

Implementing an MCP server in TypeScript and using it from Claude Desktop
169
Resources: data that the MCP server provides to clients (file contents, database records, etc.). Tools: actions that enable interaction with external systems (manipulating files, running calculations, etc.). Prompts: prompts for executing specific tasks (such as how to do a code review). An MCP server defines these capabilities as JSON objects with properties such as name, description, and arguments. For example, with a Google Calendar MCP server, the assistant could propose a travel plan that takes your existing events into account and even register the new event directly in Google Calendar. Note: to use the GitHub MCP server, create a Personal Access Token with permissions for the repositories you want to access in advance and substitute it for <YOUR_TOKEN>.
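To make the resources/tools/prompts description above concrete, here is a minimal, hedged sketch of a tool definition, assuming the official TypeScript SDK (@modelcontextprotocol/sdk) and zod; the "add" tool is a made-up example, not the Google Calendar or GitHub servers mentioned above.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Minimal MCP server exposing a single tool.
const server = new McpServer({ name: "demo-server", version: "0.1.0" });

server.tool(
  "add",                                 // name
  "Add two numbers and return the sum",  // description
  { a: z.number(), b: z.number() },      // arguments
  async ({ a, b }) => ({
    content: [{ type: "text", text: String(a + b) }],
  })
);

// Claude Desktop launches the server as a child process and talks to it over stdio.
await server.connect(new StdioServerTransport());
```

Claude Desktop would then be pointed at this script from its configuration file (claude_desktop_config.json).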

Is AI "thinking", or is it just "appearing to think"?
196
・Related articles
OpenAI develops a method for detecting "reasoning models cheating in ways users won't notice" – GIGAZINE

Rather than "banning AI" for students, the Estonian government partners with OpenAI and Anthropic to launch "AI Leap", a project to "train AI skills and critical thinking" – GIGAZINE

The argument that hallucinations are not a big problem when having AI write code – GIGAZINE

AI cheats at chess when it is about to lose – GIGAZINE

OpenAI warns users who tried to get its new "o1-preview" model to output its reasoning – GIGAZINE

Why are large language models (LLMs) so easy to fool? – GIGAZINE

From OpenAI to DeepSeek, companies say AI can “reason” now. Is it true? | Vox

Large language models such as OpenAI o1 and DeepSeek r1 demonstrate complex logical reasoning through "chain-of-thought reasoning", breaking large problems down into smaller ones and solving them step by step.

Whereas older models like ChatGPT learn from text written by humans and output text that imitates it, newer models like OpenAI o1 learn the process by which humans write text, and can output text that is more natural and appears to be grounded in thought.


Local LLMs are finally becoming usable with Gemma 3, QwQ, and others – きしだのHatena
253

And just as I had written this much, I noticed someone had made a 2-bit quantization of the 27B model, so I tried it. With Google releasing Gemma 3 and Alibaba releasing QwQ, 27B and 32B models are being claimed to rival the 671B DeepSeek V3, and even the small sizes and 2-bit quantizations are actually quite smart, so local LLMs are reaching the point where they are practical to use on the kind of PC ordinary people own.
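The excerpt does not say which runtime is used; assuming the model is served by a local Ollama instance on its default port (an assumption, not something the post states), a minimal TypeScript sketch of talking to it could look like this:

```typescript
// Query a locally served model over Ollama's HTTP API (assumption: Ollama on its default port).
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma3:27b", // hypothetical model tag
      prompt,
      stream: false, // return a single JSON object instead of a stream
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}

askLocalModel("Explain 2-bit quantization in one sentence.").then(console.log);
```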


Diagram-based communication techniques for engineers – Qiita
117
Space management; participatory diagramming; techniques for preserving your diagrams; sharing notes in small meetings or during pair programming: use A3 paper so the whole picture fits in a wide space; a T-shaped layout with shared understanding at the top and each person's ideas on the left and right; sketchnoting techniques; rotating notes 90 degrees so they are easy for the other person to read. Basic principles for raising the quality of diagrams, and diagram patterns that appear frequently in engineering communication: hierarchies, best for parent-child and containment relationships; flowcharts, for processes and algorithms; ER diagrams, for explaining data structures and relations; mind maps, for brainstorming and organizing related concepts; four-quadrant matrices, for sorting priorities and classifications. Techniques for visually expressing the importance of information. Because diagrams exist to reduce complexity, simplicity is essential. Diagramming techniques for closing gaps in knowledge and understanding between team members.
Problem: the team had conflicting understandings of a new API's specification and implementation stalled. Solution through diagramming. Result: a 30-minute diagramming session resolved two days of debate and aligned the direction of the implementation.
Problem: a distributed team was considering an architecture overhaul of an existing system, and each member's vision differed. Solution through diagramming. Result: the whole team's direction was unified and concrete milestones were decided.
Problem: a newly joined engineer needed to grasp the overall picture of a complex legacy system efficiently. Solution through diagramming. Result: ramp-up that usually takes three months was achieved in one, and the new member started contributing early.
Diagram-based communication is a powerful tool for closing gaps in perception between engineers and for sharing complex concepts efficiently. For engineers, conveying complex concepts and structures accurately and efficiently is an important part of daily work.

Momochidori | Adobe Fonts
41
Momochidori is an Adobe Originals typeface with humorous and retro-looking letterforms designed by Ryoko Nishizuka, Adobe's Principal Designer. The three aspect ratios (Condensed, Square, and Wide), each with five stroke-weight variations, offer groundbreaking support for both horizontal and vertical writing modes. This font family does not contain variable fonts, but instead ships as 15 separate variations. (The Momochidori variable font is also available, from which you can select font instances from continuous ranges of stroke weight and aspect ratio.) The Adobe Originals program started in 1989 as an in-house type foundry at Adobe, brought together to create original typefaces of exemplary design quality, technical fidelity, and aesthetic longevity. Today the Type team's mission is to make sophisticated and even experimental typefaces that explore the possibilities of design and technology. Typefaces released as Adobe Originals are the result of years of work and study, and are regarded as industry standards for the ambition and quality of their development. Fonts in the Adobe Fonts library include support for many different languages, OpenType features, and typographic styles.

Organizations that grow technology, technology that grows organizations / technology and organization
192
Slides from a talk at 技育祭 2025 Spring.

Things I noticed from trying CQRS/ES in Rails
44
The talk also covers not performing event registration and query-side updates in the same transaction, and how to implement synchronous Pub/Sub using ActiveSupport::Notifications. However, rather than reaching for a library right away, it recommends first writing a simple synchronous script and understanding the basic flow. As a concrete implementation example, it introduces using "Rails Event Store".
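The talk itself is about Ruby (ActiveSupport::Notifications, Rails Event Store); purely as an illustration of the recommended "simple synchronous script first" approach, a sketch of the same idea looks like this (all names are invented):

```typescript
// Minimal synchronous event-sourcing flow: append an event, then update a read model
// through an in-process, synchronous pub/sub, all in one small script.
type DomainEvent = { type: "OrderPlaced"; orderId: string; amount: number };

const eventLog: DomainEvent[] = [];                        // the "event store"
const readModel = new Map<string, number>();               // the query-side projection
const subscribers: Array<(e: DomainEvent) => void> = [];   // synchronous pub/sub

function subscribe(handler: (e: DomainEvent) => void): void {
  subscribers.push(handler);
}

function publish(event: DomainEvent): void {
  eventLog.push(event);                 // 1. append the event
  subscribers.forEach((h) => h(event)); // 2. notify projections synchronously
}

// Projection: keep a running total per order.
subscribe((e) => readModel.set(e.orderId, (readModel.get(e.orderId) ?? 0) + e.amount));

publish({ type: "OrderPlaced", orderId: "o-1", amount: 1200 });
console.log(readModel.get("o-1")); // 1200
```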

Triple your development efficiency with MCP servers! The 10 must-have tools of 2025 – Qiita
266
The MCP servers in each category are powerful on their own, but combining them produces even greater effects: maximized development efficiency, smoother communication, and optimized infrastructure. MCP servers will change the future of development from three angles: saving time, improving accuracy, and scalability. Identify your needs, adopt them step by step, and follow best practices. In 2025, combining MCP servers with Apidog turns developers' dreams into reality. Seeing is believing: try them and you will be surprised at how convenient they are! The article sorts the 10 MCP servers into four categories by function and use case. Developer tip: with Apidog you can easily test GitHub's API endpoints. Start your MCP server journey today!

For people who say "MCP? I've heard of it but haven't used it… 😅": an explanation from the basics to somewhat advanced topics
86
There were a few places where I got stuck, though, so I'm writing them down as troubleshooting notes!
https://docs.cursor.com/context/model-context-protocol Go to Cursor Settings > Features > MCP, click "+ Add New MCP Server", and register an MCP server that the LLM can use. The possibilities just keep expanding… The rough flow is that the app uses tools through the MCP server, passes the results to the AI model, and the model generates the answer.
When you see posts on X like "I built an MCP server that does ~!",
you can read that as "someone built a tool for AI following the standardized rules!" So what benefits does MCP actually bring?
For Cursor and Cline users, this is fantastic news…
You might object, "If all you mean is that you can make an AI model use tools, we already had ways to do that, like Function Calling! That's not what's good about MCP!"
But that misses the point.
https://platform.openai.com/docs/guides/function-calling
MCP is about defining standardized rules for letting AI models use tools.
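For contrast with the Function Calling mentioned above (see the OpenAI link), here is a hedged sketch of provider-specific tool use with the openai Node SDK; the model name and the get_weather tool are made-up examples. The point of MCP is that the tool lives in a reusable server instead of being wired into one provider's request format.

```typescript
import OpenAI from "openai";

// Provider-specific tool use ("function calling"): the tool schema is embedded in this one request.
const client = new OpenAI(); // assumes OPENAI_API_KEY in the environment

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini", // model name is an assumption
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather", // hypothetical tool
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
});

// If the model decided to call the tool, the call shows up here.
console.log(completion.choices[0].message.tool_calls);
```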

MCP Servers
26
The largest collection of MCP Servers. An MCP server implementation that provides a tool for dynamic and reflective problem-solving through a structured thinking process. AI image generation using various models. Secure file operations with configurable access controls. Web and local search using Brave's Search API. Repository management, file operations, and GitHub API integration. Summarize chat messages. An MCP server for interacting with the Neon Management API and databases. It's like v0, but in your Cursor/Windsurf/Cline: the 21st.dev Magic MCP server for working with your frontend like magic. Channel management and messaging capabilities. Browser automation and web scraping. Deploy, configure & interrogate your resources on the Cloudflare developer platform (e.g. Workers/KV/R2/D1). The new version of Cherry Studio supports MCP (CherryHQ/cherry-studio, Release v1.1.1).
HyperChat is a chat client that strives for openness, utilizing APIs from various LLMs to achieve the best chat experience, and implements productivity tools through the MCP protocol. Create, share, and use custom AI code assistants with open-source IDE extensions and a hub of models, rules, prompts, docs, and other building blocks. Roo Code (previously Roo Cline) gives you a whole dev team of AI agents in your code editor. 5ire is a cross-platform desktop AI assistant and MCP client; it is compatible with major service providers and supports local knowledge bases and tools via Model Context Protocol servers. Code at the speed of thought: Zed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter. ChatMCP is an AI chat client implementing the Model Context Protocol (MCP). An autonomous coding agent right in your IDE, capable of creating/editing files, executing commands, using the browser, and more, with your permission every step of the way.
Frequently Asked Questions about MCP Servers: MCP is an open-source protocol developed by Anthropic that enables AI systems like Claude to securely connect with various data sources. It provides a universal standard for AI assistants to access external data, tools, and prompts through a client-server architecture. MCP servers are systems that provide context, tools, and prompts to AI clients. They can expose data sources such as files, documents, databases, and API integrations, allowing AI assistants to access real-time information in a secure way. MCP servers work through a simple client-server architecture: they expose data and tools through a standardized protocol and maintain secure 1:1 connections with clients inside host applications like Claude Desktop. MCP servers can share resources (files, docs, data), expose tools (API integrations, actions), and provide prompts (templated interactions). They control their own resources and maintain clear system boundaries for security. Claude can connect to MCP servers to access external data sources and tools, enhancing its capabilities with real-time information; currently this works with local MCP servers, with enterprise remote-server support coming soon. Security is built into the MCP protocol: servers control their own resources, there is no need to share API keys with LLM providers, and each server manages its own authentication and access control. mcp.so is a community-driven platform that collects and organizes third-party MCP servers, serving as a central directory where users can discover, share, and learn about MCP servers available for AI applications. You can submit your MCP server by creating a new issue in the GitHub repository: click the "Submit" button in the navigation bar or visit the issues page directly, and provide details about your server including its name, description, features, and connection information.

Amazon Aurora now supports R8g database instances in additional AWS Regions – AWS
5
AWS Graviton4-based R8g database instances are now generally available for Amazon Aurora with PostgreSQL compatibility and Amazon Aurora with MySQL compatibility in the Europe (Ireland), Europe (Spain), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions. R8g instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU and the latest DDR5 memory. Graviton4-based instances provide up to a 40% performance improvement and up to 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon Aurora databases, depending on database engine, version, and workload. AWS Graviton4 processors are the latest generation of custom-designed AWS Graviton processors built on the AWS Nitro System. R8g DB instances are available with new 24xlarge and 48xlarge sizes. With these new sizes, R8g DB instances offer up to 192 vCPU, up to 50 Gbps enhanced networking bandwidth, and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). You can spin up Graviton4 R8g database instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to Graviton4 requires a simple instance type modification. For more details, refer to the Aurora documentation. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
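The "simple instance type modification" can also be scripted; a minimal sketch with the AWS SDK for JavaScript v3 follows (the instance identifier and target class are placeholders, and r8g availability should be checked for your Region first):

```typescript
import { RDSClient, ModifyDBInstanceCommand } from "@aws-sdk/client-rds";

const rds = new RDSClient({ region: "ap-northeast-1" });

// Move an existing Aurora instance to a Graviton4-based class.
await rds.send(
  new ModifyDBInstanceCommand({
    DBInstanceIdentifier: "my-aurora-instance", // placeholder identifier
    DBInstanceClass: "db.r8g.4xlarge",
    ApplyImmediately: true, // otherwise the change waits for the next maintenance window
  })
);
```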

IO devices and latency — PlanetScale
41
By Benjamin Dicken | March 13, 2025. Non-volatile storage is a cornerstone of modern computer systems. Every modern photo, email, bank balance, medical record, and other critical pieces of data are kept on digital storage devices, often replicated many times over for added durability. Non-volatile storage, or colloquially just "disk", can store binary data even when the computer it is attached to is powered off. Computers have other forms of volatile storage such as CPU registers, CPU cache, and random-access memory, all of which are faster but require continuous power to function. Here, we're going to cover the history, functionality, and performance of non-volatile storage devices over the history of computing, all using fun and interactive visual elements. This blog is written in celebration of our latest product release: PlanetScale Metal. Metal uses locally attached NVMe drives to run your cloud database, as opposed to the slower and less consistent network-attached storage used by most cloud database providers. This results in blazing-fast queries, low latency, and unlimited IOPS. Check out the docs to learn more. As early as the 1950s, computers were using tape drives for non-volatile digital storage. Tape storage systems have been produced in many form factors over the years, ranging from ones that take up an entire room to small drives that can fit in your pocket, such as the iconic Sony Walkman. A tape Reader is a box containing hardware specifically designed for reading tape cartridges. Tape cartridges are inserted and then unwound, which causes the tape to move over the IO Head, which can read and write data. Though tape started being used to store digital information over 70 years ago, it is still in use today for certain applications. A standard LTO tape cartridge has several hundred meters of 0.5-inch-wide tape. The tape has several tracks running along its length, each track being further divided up into many small cells. A single tape cartridge contains many trillions of cells. Each cell can have its magnetic polarization set to up or down, corresponding to a binary 0 or 1. Technically, the magnetic field created by the transition between two cells is what makes the 1 or 0. A long sequence of bits on a tape forms a page of data. In the visualization of the tape reader, we simplify this by showing the tape as a simple sequence of data pages, rather than showing individual bits. When a tape needs to be read, it is loaded into a reader, sometimes by hand and sometimes by robot. The reader then spins the cartridge with its motor and uses the reader head to read off the binary values as the tape passes underneath. Give this a try with the (greatly slowed down) interactive visualization below. You can control the speed of the tape if you'd like it faster or slower. You can also issue read requests and write requests and then monitor how long these take. You'll also be able to see the queue of pending IO operations pop up in the top-left corner. Try issuing a few requests to get a feel for how tape storage works. If you spend enough time with this, you will notice that even with modern tape systems, reading data that is far away on a tape can take tens of seconds, because it may need to spin the tape by hundreds of meters to reach the desired data.
Let's compare two more specific, interactive examples to illustrate this further. Say we need to read a total of 4 pages and write an additional 4 pages worth of data. In the first scenario, all 4 pages we need to read are in a neat sequence, and the 4 to write to are immediately after the reads. You can see the IO operations queued up in the white container on the top-left. Go ahead and click the Time IO button to see this in action, and observe the time it takes to complete. As you can see, it takes somewhere around 3-4 seconds. On a real system, with an IO head that can operate much faster and motors that can drive the spools more quickly, it would be much faster. Now consider another scenario where we need to read and write the same number of pages. However, these reads and writes are spread out throughout the tape. Go ahead and click the Time IO button again. That took ~7x longer for the same total number of reads and writes! Imagine if this system was being used to load your social media feed or your email inbox. It might take 10s of seconds or even a full minute to display. This would be totally unacceptable. Though the latency for random reads and writes is poor, tape systems operate quite well when reading or writing data in long sequences. In fact, tape storage still has many such use cases today in the modern tech world. Tape is particularly well-suited for situations where there is a need for massive amounts of storage that does not need to be read frequently, but needs to be safely stored. This is because tape is both cheaper per-gigabyte and has a longer shelf-life than its competition: solid state drives and hard disk drives. For example, CERN has a tape storage data warehouse with over 400 petabytes of data under management. AWS also offers tape archiving as a service. What tape is not well suited for is high-traffic transactional databases. For these and many other high-performance tasks, other storage mediums are needed. The next major breakthrough in storage technology was the hard disk drive. Instead of storing binary data on a tape, we store them on a small circular metal disk known as the Platter. This disk is placed inside of an enclosure with a special read/write head, and spins very fast (7200 RPM is common, for example). Like the tape, this disk is also divided into tracks. However, the tracks are circular, and a single disk will often have well over 100,000 tracks. Each track contains hundreds of thousands of pages, and each page containing 4k (or so) of data. An HDD requires a mechanical spinning motion of both the reader and the platter to bring the data to the correct location for reading. One advantage of HDD over tape is that the entire surface area of the bits is available 100% of the time. It still takes time to move the needle + spin the disk to the correct location for a read or write, but it does not need to be "uncovered" like it needs to be for a tape. This combined with the fact that there are two different things that can spin, means data can be read and written with much lower latency. A typical random read can be performed in 1-3 milliseconds. Below is an interactive hard drive. You can control the speed of the platter if you'd like it faster or slower. You can request that the hard drive read a page and write to a nearby available page. If you request a read or write before the previous one is complete, a queue will be built up, and the disk will process the requests in the order it receives them. 
As before, you'll also be able to see the queue of pending IO operations in the white IO queue box. As with the tape, the speed of the platter spin has been slowed down by orders of magnitude to make it easier to see what's going on. In real disks, there would also be many more tracks and sectors, enough to store multiple terabytes of data in some cases. Let's again consider a few specific scenarios to see how the order of reads and writes affects latency. Say we need to write a total of three pages of data and then read 3 pages afterward. The three writes will happen on nearby available pages, and the reads will be from tracks 1, 4, and 3. Go ahead and click the Time IO button. You'll see the requests hit the queue, the reads and writes get fulfilled, and then the total time at the end. Due to the sequential nature of most of these operations, all the tasks were able to complete quickly. Now consider the same set of 6 reads and writes, but with them being interleaved in a different order. Go ahead and click the Time IO button again. If you had the patience to wait until the end, you should notice how the same total number of reads and writes took much longer. A lot of time was spent waiting for the platter to spin into the correct place under the read head. Magnetic disks have supported command queueing directly on the disks for a long time (80s with SCSI, 2000s with SATA). Because of this, the OS can issue multiple commands that run in parallel and potentially out-of-order, similar to SSDs. Magnetic disks also improve their performance if they can build up a queue of operations that the disk controller can then schedule reads and writes to optimize for the geometry of the disk. Here's a visualization to help us see the difference between the latency of a random tape read compared to a random disk read. A random tape read will often take multiple seconds (I put 1 second here to be generous) and a disk head seek takes closer to 2 milliseconds (one thousandth of a second) Even though HDDs are an improvement over tape, they are still "slow" in some scenarios, especially random reads and writes. The next big breakthrough, and currently the most common storage format for transactional databases, are SSDs. Solid State Storage, or "flash" storage, was invented in the 1980s. It was around even while tape and hard disk drives dominated the commercial and consumer storage spaces. It didn't become mainstream for consumer storage until the 2000s due to technological limitations and cost. The advantage of SSDs over both tape and disk is that they do not rely on any mechanical components to read data. All data is read, written, and erased electronically using a special type of non-volatile transistor known as NAND flash. This means that each 1 or 0 can be read or written without the need to move any physical components, but 100% through electrical signaling. SSDs are organized into one or more targets, each of which contains many blocks which each contain some number of pages. SSDs read and write data at the page level, meaning they can only read or write full pages at a time. In the SSD below, you can see reads and writes happening via the lines between the controller and targets (also called "traces"). The removal of mechanical components reduces the latency between when a request is made and when the drive can fulfill the request. There is no more waiting around for something to spin. 
We're showing small examples in the visual to make it easier to follow along, but a single SSD is capable of storing multiple terabytes of data. For example, say each page holds 4096 bits of data (4k). Now, say each block stores 16k pages, each target stores 16k blocks, and our device has 8 targets. This comes out to 4k * 16k * 16k * 8 = 8,796,093,022,208 bits, or 8 terabytes. We could increase the capacity of this drive by adding more targets or packing more pages in per block. Here's a visualization to help us see the difference between the latency of a random read on an HDD vs SSD. A random read on an SSD varies by model, but can execute as fast as 16μs (μs = microsecond, which is one millionth of a second). It would be tempting to think that with the removal of mechanical parts, the organization of data on an SSD no longer matters. Since we don't have to wait for things to spin, we can access any data at any location with perfect speed, right? Not quite. There are other factors that impact the performance of IO operations on an SSD. We won't cover them all here, but two that we will discuss are parallelism and garbage collection. Typically, each target has a dedicated line going from the control unit to the target. This line is what processes reads and writes, and only one page can be communicated by each line at a time. Pages can be communicated on these lines really fast, but it still does take a small slice of time. The organization of data and sequence of reads and writes has a significant impact on how efficiently these lines can be used. In the interactive SSD below, we have 4 targets and a set of 8 write operations queued up. You can click the Time IO button to see what happens when we can use the lines in parallel to get these pages written. In this case, we wrote 8 pages spread across the 4 targets. Because they were spread out, we were able to leverage parallelism to write 4 at a time in two time slices. Compare that with another sequence where the SSD writes all 8 pages to the same target. The SSD can only utilize a single data line for the writes. Again, hit the Time IO button to see the timing. Notice how only one line was used and it needed to write sequentially. All the other lines sat dormant. This demonstrates that the order in which we read and write data matters for performance. Many software engineers don't have to think about this on a day-to-day basis, but those designing software like MySQL need to pay careful attention to what structures data is being stored in and how data is laid out on disk. The minimum "chunk" of data that can be read from or written to an SSD is the size of a page. Even if you only need a subset of the data within, that is the unit that requests to the drive must be made in. Data can be read from a page any number of times. However, writes are a bit different. After a page is written to, it cannot be overwritten with new data until the old data has been explicitly erased. The tricky part is, individual pages cannot be erased. When you need to erase data, the entire block must be erased, and afterwards all of the pages within it can be reused. Each SSD needs to have an internal algorithm for managing which pages are empty, which are in use, and which are dirty. A dirty page is one that has been written to but the data is no longer needed and ready to be erased. Data also sometimes needs to be re-organized to allow for new write traffic. The algorithm that manages this is called the garbage collector. 
Let's see how this can have an impact by looking at another visualization. In the below SSD, all four of the targets are storing data. Some of the data is dirty, indicated by red text. We want to write 5 pages worth of data to this SSD. If we time this sequence of writes, the SSD can happily write them to free pages with no need for extra garbage collection. There are sufficient unused pages in the first target. Now say we have a drive with different data already on it, but we want to write those same 5 pages of data to it. In this drive, we only have 2 pages that are unused, but a number of dirty pages. In order to write 5 pages of data, the SSD will need to spend some time doing garbage collection to make room for the new data. When attempting to time another sequence of writes, some garbage collection will take place to make room for the data, slowing down the write. In this case, the drive had to move the two non-dirty pages from the top-left target to new locations. By doing this, it was able to make all of the pages on the top-left target dirty, making it safe to erase that data. This made room for the 5 new pages of data to be written. These additional steps significantly slowed down the performance of the write. This shows how the organization of data on the drive can have an impact on performance. When SSDs have a lot of reads, writes, and deletes, we can end up with SSDs that have degraded performance due to garbage collection. Though you may not be aware, busy SSDs do garbage collection tasks regularly, which can slow down other operations. These are just two of many reasons why the arrangement of data on an SSD affects its performance. The shift from tape, to disk, to solid state has allowed durable IO performance to accelerate dramatically over the past several decades. However, there is another phenomenon that has caused an additional shift in IO performance: moving to the cloud. Though there were companies offering cloud compute services before this, the mass move to cloud gained significant traction when Amazon AWS launched in 2006. Since that time, tens of thousands of companies have moved their app servers and database systems to their cloud and other similar services from Google, Microsoft, and others. Though there are many upsides to this trend, there are several downsides. One of these is that servers tend to have less permanence. Users rent (virtualised) servers on arbitrary hardware within gigantic data centers. These servers can get shut down at any time for a variety of reasons – hardware failure, hardware replacement, network disconnects, etc. When building platforms on rented cloud infrastructure, computer systems need to be able to tolerate more frequent failures at any moment. This, along with many engineers' desire for dynamically scalable storage volumes, has led to a new sub-phenomenon: Separation of storage and compute. Traditionally, most servers, desktops, laptops, phones and other computing devices have their non-volatile storage directly attached. These are attached with SATA cables, PCIe interfaces, or even built directly into the same SoC as the RAM, CPU, and other components. This is great for speed, but provides the following challenges: 1. if the server fails or is retired, the data on its locally attached drive is at risk of going with it, and 2. the drive's capacity is fixed, so storage cannot easily grow with the data. For application servers, 1 and 2 are typically not a big deal since they work well in ephemeral environments by design. If one goes down, just spin up a new one. They also don't typically need much storage, as most of what they do happens in-memory. Databases are a different story.
If a server goes down, we don't want to lose our data, and data size grows quickly, meaning we may hit storage limits. Partly due to this, many cloud providers allow you to spin up compute instances with a separately-configurable storage system attached over the network. In other words, using network-attached storage as the default. When you create a new server in EC2, the default is typically to attach an EBS network storage volume. Many database services including Amazon RDS, Amazon Aurora, Google Cloud SQL, and PlanetScale rely on these types of storage systems that have compute separated from storage over the network. This provides a nice advantage in that the storage volume can be dynamically resized as data grows and shrinks. It also means that if a server goes down, the data is still safe, and can be re-attached to a different server. This simplicity has come at a cost, however. Consider the following simple configuration. In it, we have a server with a CPU, RAM, and direct-attached NVMe SSD. NVMe SSDs are a type of solid state disk that use the non-volatile memory host controller interface specification for blazing-fast IO speed and great bandwidth. In such a setup, the round trip from CPU to memory (RAM) takes about 100 nanoseconds (a nanosecond is 1 billionth of a second). A round trip from the CPU to a locally-attached NVMe SSD takes about 50,000 nanoseconds (50 microseconds). This makes it pretty clear that it's best to keep as much data in memory as possible for faster IO times. However, we still need disk because (A) memory is more expensive and (B) we need to store our data somewhere permanent. As slow as it may seem here, a locally-attached NVMe SSD is about as fast as it gets for modern storage. Let's compare this to the speed of a network-attached storage volume, such as EBS. Reads and writes require a short network round trip within a data center. The round trip time is significantly worse, taking about 250,000 nanoseconds (250 microseconds, or 0.25 milliseconds). Using the same cutting-edge SSD now takes an order of magnitude longer to fulfill individual read and write requests. When we have large amounts of sequential IO, the negative impact of this can be reduced, but not eliminated. We have introduced significant latency deterioration for every time we need to hit our storage system. Another issue with network-attached storage in the cloud comes in the form of limiting IOPS. Many cloud providers that use this model, including AWS and Google Cloud, limit the amount of IO operations you can send over the wire. By default, a GP3 EBS instance on Amazon allows you to send 3000 IOPS, with an additional pool that can be built up to allow for occasional bursts. The following visual shows how this works. Note that the burst balance size is smaller here than in reality to make it easier to see. If instead you have your storage attached directly to your compute instance, there are no artificial limits placed on IO operations. You can read and write as fast as the hardware will allow for. For as many steps as we've taken forward in IO performance over the years, this seems like a step in the wrong direction. This separation buys some nice conveniences, but at what cost to performance? How do we overcome issue 1 (data durability) and 2 (drive scalability) while keeping good IOPS performance? Issue 1 can be overcome with replication. Instead of relying on a single server to store all data, we can replicate it onto several computers.
One common way of doing this is to have one server act as the primary, which will receive all write requests. Then 2 or more additional servers get all the data replicated to them. With the data in three places, the likelihood of losing data becomes very small. Let's look at concrete numbers. As a made up value, say in a given month, there is a 1% chance of a server failing. With a single server, this means we have a 1% chance of losing our data each month. This is unacceptable for any serious business purpose. However, with three servers, this goes down to 1% × 1% × 1% = 0.0001% chance (1 in one million). At PlanetScale the protection is actually far stronger than even this, as we automatically detect and replace failed nodes in your cluster. We take frequent and reliable backups of the data in your database for added protection. Problem 2 can be solved, though it takes a bit more manual intervention when working with directly-attached SSDs. We need to ensure that we monitor and get alerted when our disk approaches capacity limits, and then have tools to easily increase capacity when needed. With such a feature, we can have data permanence, scalability, and blazing fast performance. This is exactly what PlanetScale has built with Metal. PlanetScale just announced Metal, an industry-leading solution to this problem. With Metal, you get a full-fledged Vitess+MySQL cluster set up, with each MySQL instance running with a direct-attached NVMe SSD drive. Each Metal cluster comes with a primary and two replicas by default for extremely durable data. We allow you to resize your servers with larger drives with just a few clicks of a button when you run up against storage limits. Behind the scenes, we handle spinning up new nodes and migrating your data from your old instances to the new ones with zero downtime. Perhaps most importantly, with a Metal database, there is no artificial cap on IOPS. You can perform IO operations with minimal latency, and hammer it as hard as you want without being throttled or paying for expensive IOPS classes on your favorite cloud provider. If you want the ultimate in performance and scalability, try Metal today.
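A quick check of the replication arithmetic above (the 1% monthly failure rate is the article's own illustrative number, and independent failures are assumed):

```typescript
// Probability that every replica fails in the same month, assuming independent failures.
const monthlyFailureRate = 0.01; // the article's illustrative 1% per server

function clusterLossProbability(replicas: number): number {
  return Math.pow(monthlyFailureRate, replicas);
}

console.log(clusterLossProbability(1)); // 0.01 -> 1%
console.log(clusterLossProbability(3)); // ~1e-6 -> 0.0001%, about 1 in a million
```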

A pocket-sized computer for 10,000 yen! Let's get programming [PicoCalc kit]
25
2025/3/14 – UMPC (ultra-compact PCs), apps/programming. Tags: clockwork, PicoCalc kit.
From clockwork, which has released many unique computers like the ones covered before, comes a new pocket-sized computer, the "PicoCalc kit". Daily Gadget (デイリーガジェット) publishes review and interview videos of UMPCs (ultra-compact PCs), smartphones, tablets, retro PCs, and more on YouTube almost every day.

How does Windows internally check whether it is connected to the internet?
47
http://www.msftconnecttest.com/connecttest.txt
http://ipv6.msftconnecttest.com/connecttest.txt
Note that in NCSI, the IPv4 and IPv6 HTTP probes run in parallel, and if either succeeds, Windows judges that it is connected to the internet. If the DNS server configured on the Windows side cannot correctly resolve the DNS probe hostname, you can conclude that either the Windows DNS settings have a problem, or a captive portal is using DNS resolution to redirect you to its login page.
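A minimal sketch of the probe logic described above (the expected response body "Microsoft Connect Test" is what the probe file normally contains, but treat the exact string as an assumption and verify it yourself):

```typescript
// NCSI-style HTTP probe: fetch the connect test file and compare the body.
async function probe(url: string): Promise<boolean> {
  try {
    const res = await fetch(url);
    const body = await res.text();
    return res.ok && body.trim() === "Microsoft Connect Test";
  } catch {
    return false; // no route, DNS failure, captive portal reset, etc.
  }
}

// IPv4 and IPv6 probes run in parallel; either one succeeding counts as "connected".
const [v4, v6] = await Promise.all([
  probe("http://www.msftconnecttest.com/connecttest.txt"),
  probe("http://ipv6.msftconnecttest.com/connecttest.txt"),
]);
console.log({ v4, v6, connected: v4 || v6 });
```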

The JavaScript Oxidation Compiler
10
We are thrilled to announce that Oxlint is now in beta release, after more than a year of development by the community! This milestone represents a significant step forward in feature completeness, performance, and stability. At this stage, Oxlint can be used to fully replace ESLint in small to medium projects. For larger projects, our advice is to turn off ESLint rules via eslint-plugin-oxlint, and run Oxlint before ESLint in your local or CI setup for a quicker feedback loop. To test Oxlint in your codebase, you can use the package manager of your choice at the root of your codebase. For more detailed instructions on how to use Oxlint and integrate it with your project or editor, check out the installation guide. We have focused on making Oxlint more feature complete, supporting many of the most commonly used ESLint rules and plugins, but we have also made Oxlint much faster as well. The first generally available (GA) release of Oxlint had 205 rules in total, with 70 of those being enabled by default. This beta release now includes 502 rules in total, with 99 of those being enabled by default (a 41% increase in the number of rules enabled by default). Despite adding many new rules that are enabled by default, Oxlint is now much faster than it ever has been. Here are some benchmarks on some popular repositories. One of the most commonly requested features for Oxlint is support for existing custom ESLint plugins. We have been busy working on the prerequisites for this feature, and on enabling fast linter plugins written in JavaScript. We hope to have this feature available for the next major release, and to have more information to share about it in the near future. We are also planning to continue improving the IDE/editor integrations, with improved support for the VSCode, Zed, coc.nvim, and IntelliJ plugins. The Oxlint beta would not have been possible without the over 200 contributors to the project.

[Xiaomi 14T] The best cost-performance smartphone
10
Many people have wanted a cheap smartphone that is still fast for games and everyday performance. So this article introduces three recommended phones in the 50,000-80,000 yen range that you could comfortably replace every couple of years, and then two phones that can handle games and perform well while being remarkably cheap. Table of contents (click to jump to a section): a guide to smartphone price ranges; there are many smartphone models today and prices vary widely, so here is the general tendency at each price level …


[Prompts included] Zoom meetings become minutes automatically! Transcription × generative AI for big reductions in workload
15
To solve these problems, we run an in-house workflow that uses Zoom's transcript generation as a trigger and generative AI to create a summary, and this article introduces how it works! Meetings are recorded in Zoom, and the transcript text is automatically output to Notion in a specified format, as in the image below. Accuracy of information and smoother sharing:
with detailed timestamped records and an AI-generated summary, meeting content is organized without omissions and sharing within the company has become much smoother. (This article shows an implementation example using Gemini.) If processing continues, the generative AI then produces the minutes in Markdown format.
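As a rough sketch of the summarization step only (not the article's published code), the transcript could be passed to Gemini with the official @google/generative-ai SDK; the model name and prompt wording are assumptions.

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" }); // model name is an assumption

// Turn a raw Zoom transcript into Markdown minutes.
async function summarizeTranscript(transcript: string): Promise<string> {
  const prompt =
    "Summarize the following meeting transcript as Markdown minutes " +
    "with sections for decisions, action items, and open questions:\n\n" +
    transcript;
  const result = await model.generateContent(prompt);
  return result.response.text(); // Markdown to be posted to Notion, etc.
}

summarizeTranscript("00:00 Alice: Let's review the release plan ...").then(console.log);
```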

Numerous reports of "audio from Bluetooth-connected devices cutting out" on the iPhone 16e
8
・Related articles
iPhone 16e goes on sale February 28, 2025, priced from 99,800 yen (tax included), with an A18 chip and Apple Intelligence support – GIGAZINE

Quick photo review of the sub-100,000-yen budget model "iPhone 16e", with size and weight comparisons against past iPhone SE models – GIGAZINE

What power adapter does the "iPhone 16e" need for charging? Checking battery life, charging time, and benchmark results – GIGAZINE

What kind of photos can the iPhone 16e take with its single 48-megapixel camera? We shot all sorts of things – GIGAZINE

Apple denies the rumor that "the iPhone 16e lacks MagSafe because of its in-house C1 modem" – GIGAZINE

iPhone 16e Has a Bluetooth Audio Problem – MacRumors
https://www.macrumors.com/2025/03/13/iphone-16e-bluetooth-audio-problem/

Some iPhone 16e owners are reporting Bluetooth audio issues that could be an iOS problem | TechRadar
https://www.techradar.com/phones/some-iphone-16e-owners-are-reporting-bluetooth-audio-issues-that-could-be-an-ios-problem

iPhone 16e users report Bluetooth audio problems
https://appleinsider.com/articles/25/03/14/iphone-16e-users-report-bluetooth-audio-problems

On March 5, 2025 local time, the problem of Bluetooth audio cutting out on the iPhone 16e was reported on Apple's official support forums.

Numerous reports say that on the "iPhone 16e", which Apple released on February 28, 2025, audio periodically cuts out when connected to Bluetooth speakers or Bluetooth earphones.


"Flip Clock", a free OBS plugin that easily displays a flip clock, countdown timer, or stopwatch for streamers
7
・Related articles
Trying "obs-backgroundremoval", a free plugin that automatically removes the webcam background in OBS – GIGAZINE

Installing "win-capture-audio", a plugin that lets you toggle and adjust audio per application in the streaming software "OBS" – GIGAZINE

NVIDIA's noise-removal filter becomes available in the live-streaming software "OBS Studio", and this is what it's like to actually use it – GIGAZINE

How to record your screen with the free, open-source live-streaming software "OBS" and easily capture your screen in video calls – GIGAZINE

How does "Hybrid MP4", newly added to OBS, differ from regular MP4? – GIGAZINE

GitHub – PeterCang/flip-clock: Enhance your OBS streams with our versatile Clock Plugin! Display current time with customizable formats (AM/PM, seconds), countdowns, or a stopwatch – perfect for keeping your audience on time and engaged.
https://github.com/PeterCang/flip-clock

Flip Clock | OBS Forums
https://obsproject.com/forum/resources/flip-clock.2042/

You can see at a glance what kind of clock Flip Clock displays on screen by watching the movie below.

When live-streaming on YouTube, Twitch, and the like, the open-source streaming tool OBS lets you freely customize the layout of your stream.


GitHub – xpipe-io/xpipe: Your entire server infrastructure at your fingertips
5
Access your entire server infrastructure from your local desktop. XPipe is a new type of shell connection hub and remote file manager that allows you to access your entire server infrastructure from your local machine. It works on top of your installed command-line programs and does not require any setup on your remote systems. So if you normally use CLI tools like ssh, docker, kubectl, etc. to connect to your servers, you can just use XPipe on top of that. XPipe fully integrates with your tools such as your favourite text/code editors, terminals, shells, command-line tools and more. The platform is designed to be extensible, allowing anyone to easily add support for more tools or to implement custom functionality through a modular extension system. It currently supports a range of tools (see the list in the repository). Note that this is a desktop application that should be run on your local desktop workstation, not on any server or containers. It will be able to connect to your server infrastructure from there. Installers are the easiest way to get started and come with optional automatic update functionality. If you don't like installers, you can also use a portable version that is packaged as an archive. Alternatively, you can use the available package managers; for example, Homebrew can install XPipe with brew install --cask xpipe-io/tap/xpipe. You can install XPipe the fastest by pasting the installation command into your terminal. This will perform the setup automatically.
The script supports installation via apt, dnf, yum, zypper, rpm, and pacman on Linux: Of course, there are also other installation methods available. The following Debian installers are available: Note that you should use apt to install the package with sudo apt install <file>, as other package managers, for example dpkg,
are not able to resolve and install any dependency packages. The rpm releases are signed with the GPG key https://xpipe.io/signatures/crschnick.asc.
You can import it via rpm --import https://xpipe.io/signatures/crschnick.asc to allow your rpm-based package manager to verify the release signature. The following rpm installers are available: The same applies here: you should use a package manager that supports resolving and installing required dependencies if needed. There is an official AUR package available that you can either install manually or via an AUR helper such as with yay -S xpipe. There's an official xpipe nixpkg available that you can install with nix-env -iA nixos.xpipe. This one is however not always up to date. There is also a custom repository that contains the latest up-to-date releases: https://github.com/xpipe-io/nixpkg.
You can install XPipe by following the instructions in the linked repository. In case you prefer to use an archive version that you can extract anywhere, you can use these: Alternatively, there are also AppImages available: Note that the portable version assumes that you have some basic packages for graphical systems already installed
as it is not a perfect standalone version. It should however run on most systems. XPipe is a desktop application first and foremost. It requires a full desktop environment to function, with various installed applications such as terminals, editors, shells, CLI tools, and more. So there is no true web-based interface for XPipe. Since it might make sense however to access your XPipe environment from the web, there is also a so-called webtop docker container image for XPipe. XPipe Webtop is a web-based desktop environment that can be run in a container and accessed from a browser via KasmVNC. The desktop environment comes with XPipe and various terminals and editors preinstalled and configured. XPipe follows an open core model, which essentially means that the main application is open source while certain other components are not. This mainly concerns the features only available in the homelab/professional plan and the shell handling library implementation. Furthermore, some CI pipelines and tests that run on private servers are also not included in the open repository. The distributed XPipe application consists of two parts. Additional features are available in the homelab/professional plan. For more details see https://xpipe.io/pricing.
If your enterprise puts great emphasis on having access to the full source code, there are also full source-available enterprise options available. You can find the documentation at https://docs.xpipe.io.

Is that document still breathing? ~ How to grow "living" documentation that keeps being used ~
63

I hope this is useful for anyone looking for hints on keeping documentation in active use, rather than treating it as done once it is written!
"Is Your Documentation Alive?" ~ How to Keep It Relevant and Continuously Used ~ This LT explores the theme, "Is your document still breathing?", focusing on how to prevent documentation from becoming outdated and ensuring it remains actively used.
It introduces four key steps to maintaining living documentation, along with practical and actionable methods.
If you want to go beyond just creating documents and learn how to keep them relevant and useful, this is for you!


Yodobashi's new business format: "Yodobloom SAKE", a sake theme park
24
Yodobashi Camera will open "Yodobloom SAKE Umeda", a new-format sake theme park, on the second floor of Multimedia Umeda on April 5. For 1,000 yen and up per 30 minutes, visitors can taste from 100 kinds of sake carefully selected each season while being guided by kikisake-shi, certified sake sommeliers.

GitHub – crmne/ruby_llm: A delightful Ruby way to work with AI. No configuration madness, no complex callbacks, no handler hell – just beautiful, expressive Ruby code.
18
A delightful Ruby way to work with AI. No configuration madness, no complex callbacks, no handler hell – just beautiful, expressive Ruby code. Every AI provider comes with its own client library, its own response format, its own conventions for streaming, and its own way of handling errors. Want to use multiple providers? Prepare to juggle incompatible APIs and bloated dependencies. RubyLLM fixes all that. One beautiful API for everything. One consistent format. Minimal dependencies: just Faraday and Zeitwerk. Because working with AI should be a joy, not a chore. Configure it with your API keys, then check out the guides at https://rubyllm.com for deeper dives into conversations with tools, streaming responses, embedding generations, and more. Released under the MIT License.

Gemini's "Deep Research" partially opened to free users, plus answers tailored to users' tastes and interests
24
Google has expanded the features and services of its generative AI "Gemini", including providing a personalization feature based on users' search history and making "Deep Research", which researches a topic in depth and outputs the result as a report, available to free users as well. To use it, select "Personalization (experimental)" from the model drop-down menu; a confirmation dialog for connecting the Gemini app to your Google Search history appears, and choosing "Connect now" allows Gemini to access your Google Search history.

Crowdfunding starts for "audio room panels" that suppress speaker reverberation for more immersive sound
14
Hayashi Seisakusho (林製作所), a manufacturer and seller of office furniture, has started a Makuake crowdfunding campaign, beginning March 14, for the "Audio Room Panel" in 3-panel and 4-panel sets (GP-03/GP-04).

グーグル、「Gemini」の「Deep Research」と「Gems」を無料提供へ
30
その一環としてGoogleは米国時間3月13日、「Deep Research」と「Gems」という、とりわけ人気がある2つの機能について、すべてのユーザーに提供を開始すると発表した。しかも、これだけのアップグレードにもかかわらず、新しいプロンプトバーかモデル選択リストからDeep Researchを選択するだけで、今後は誰もが無料で試せるようになると、Googleはブログ投稿で説明している。

OSCHINA、スラドと OSDN の受け入れ先募集を打ち切ってサービス終了へ
133
モッドはしたくないからやりたい人がいたら明け渡す(渡す前にある程度ポストやコメントしてもらって変な人じゃない事は確認したいけど)https://www.reddit.com/r/sradot/ 来週からスラッシュドットジャパンが始まるのかな(超適当 フラド、フラッシュドットジャパン、スラッシュチョットジャパンみたいな(中華的な?)怪しいサイトが始まるかも(超々適当# あ、今の状況なら、クラッシュドットジャパンか 新しくするなら、今度はストレージ使用量が常にトップ画面に表示される仕様にするとナイスだと思う。> 中国特有のこのプライド意識によって… 最後まで定番の不思議妄想開陳 やつをありがとうこの景色も見納め! 地球か・・・ 何もかもが、みな懐かしい・・・ ところで最後だからこっそり教えるけど、実はソレ、ここで同意や称賛を受けていたわけじゃなかったんだ (きょうがくのじじつ) 誰かredditでスラドsubredditでもやりまへんか? > 誰かredditでスラドsubredditでもやりまへんか? 5chでもよくね?「スラド雑談用ストーリー [5]」とか作れば晩年とほぼ同じ運用になる。 ストーリーの最初の方に>無償で取得って書いてあるじゃんOSCHINAにはタダで上げたんでしょOSDN社から取得したのは有償だったんだろうけど、ソフトウェア業界への貢献とか考えてたんじゃないの、当時は > OSCHINAにはタダで上げたんでしょ つまり「どうせサービスを終了するなら格安で売ってよ」案件だったんですかね。

医療機関等はサイバー攻撃に備え「適切なパスワード設定、管理」「USB接続制限」「2要素認証」等確認を—―医療等情報利活用ワーキング(2) | GemMed | データが拓く新時代医療
9
緊急時用の柔軟な仕組みとその厳格な運用ルール設定なども検討しておくべき(高倉弘喜構成員:国立情報学研究所ストラテジックサイバーレジリエンス研究開発センター長)▼小規模医療機関向けに「どのような規定を整備すればよいのか」を具体的に示すなどの支援も行ってほしい(小野寺哲夫構成員:日本歯科医師会常務理事)▼今やネット通販などでも2要素認証は一般的に行われるようになってきており、医療機関等でも導入を急いでほしい(近藤則子構成員:老テク研究会事務局長)▼医療機関とベンダーとの情報システム契約内容によって「サイバーセキュリティ対策」費用が新たに発生するのか、などが変わってくると思われる。
▼2要素認証
以下のいずれか2要素を用いてアクセス権限の認証を行う
・ID・パスワードの組み合わせのような「利用者の記憶」によるもの
・指紋や静脈、虹彩のような利用者の生体的特徴を利用した「生体情報」によるもの
・ICカードのような「物理媒体」によるもの

【規程類の整備】
▽医療情報システムの安全管理が適切に行われるためには、組織内において「明文化されたルール」を定めた運用管理規程の整備が重要である(明文化ルールがない場合、情報セキュリティ担当者が異動した場合などに安全性確保が難しくなる)

(対策方針)
▽サイバーセキュリティ対策チェックリストにおいても「運用管理規程の整備状況」についての項目を追加する

●2025年度のサイバーセキュリティチェックリスト案はこちら(後述のように退職者等アカウントの管理について一部修正される見込み)
●2025年度版の医療機関等におけるサイバーセキュリティ対策チェックリストマニュアル(案)はこちらとこちら(2024年度版からの修正点を見え消しにしている版)
2025年度版のサイバーセキュリティ対策チェックリスト案(医療等情報利活用ワーキング(2)1 250313)

こうしたサイバーセキュリティチェックリストの充実方針に異論・反論は出ておらず、厚労省は「2025年度の立入検査」(2025年5,6月頃に実施される見込み)に向けてチェックリストの整備・公表を近く行います。また医療機関単独での対応が困難なこともあり、国からベンダーに対し「医療機関への協力」をしっかり要請してほしい(長島構成員)▼検査を行う保健所職員への研修などを十分に行ってほしい(山口育子構成員:ささえあい医療人権センターCOML理事長)▼2要素認証の導入は非常に重要だが、医療機関等は「生命を守る」ための緊急対応が必要な場面も出てこよう。


AI時代の仕事術(10方式)
553

そして思考実験だけでは不明確な点がある場合や、実際に実行することで思考するだけでは得られない大きな価値が得られると想定された場合に、実行へと移ります。再掲ですが、仕事においては「Issue(≒課題・大タスク)」に対して、「〇〇を目的に、△△という制約条件の下で◎◎を達成させてください」といった、精緻な言語化を伴う問題定義を自らに課します。 (1)ダラダラと仕事に取り組んでしまう
(2)締切が外部から突然与えられて、焦って実行する
(3)そして最悪なケースが、「その仕事の芯を食っておらず、本質的には重要ではない、やってもやらなくても良いような仕事(タスク)まで実行してしまう」、すなわち法則通り「仕事の内容量が増える現象」が発生するケースです。この重要でない仕事の膨張を避けるために、自ら設定する期限は、これでは短すぎるのではないかと感じるくらいのものが望ましいです。 「思考実験優先方式」とは、「この解決策を実施すると、何がどのように変化し、最終的にどんな結果が出そうで、そこから得られる成果は何で、どう嬉しい状態になるのか、そして言えること=結論は何になりそうか?」を、実際に手を動かして実行に移す前に、脳内で細かくシミュレーションする方式です。


[DATAで見るケータイ業界] 基地局ベンダーの現在地
14
DATAで見るケータイ業界 MCA 2025年3月15日 07:00 最近、基地局ベンダーの動きが活性化している。 https://www.mca.co.jp/company/analyst/analystinfo/ ▲ 約10年のスパンで観る携帯市場の累積シェアトレンド NECも3月に5G対応の仮想化基地局(vRAN)向けソフトウェアの開発・商用化を発表し、2026年度までに国内外のキャリアへ5万局以上の展開を目指す。

積ん読にさよならしたいのでNotebookLMで物理本をスマートに管理する
60
技術書の入手先:
技術書を中心に多くの書籍をPDF形式で販売している出版社があるので、そちらで欲しい本が利用できる場合は検討することをおすすめします(例:SEShop、Gihyo、オライリーなど) 電子書籍といえば…で出てくるAmazonが提供するKindleも便利です(僕もたくさんKindleで本を買っています)が、NotebookLMとの連携をさせたい!となった場合には基本的にDRMで守られていたりするので注意が必要です。(これを実現するためには、1ノートブックに登録できるソース数上限に引っかからないようにNotebookLM Plusを契約していることが必要な場合があります) 物理本の目次部分だけでもNotebookLMに取り込んでおくことで、NotebookLMならではの様々な機能が利用できるようになり、読書体験が楽で楽しいものになります。 また、ここまでは物理本をどう管理するか?というところにフォーカスしてきましたが、ぶっちゃけたところNotebookLMの機能を最大限に活用するなら、PDF形式で購入できる書籍の方がおすすめだったりはします 🙏 PDF ならではの利点:
PDFであれば、テキストデータが直接NotebookLMに取り込まれるため、テキスト検索や質問応答機能がフルに活用できます。
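参考までに、物理本のスキャンPDFから目次ページだけを別ファイルとして切り出し、それをNotebookLMに取り込む、という流れの最小スケッチを示します(pypdfの利用を前提とし、ファイル名・ページ範囲は仮のものです。記事本文の手順そのものではありません)。

```python
# 物理本をスキャンしたPDFから目次ページだけを抜き出すスケッチ(pypdf を使う想定)。
# ファイル名・ページ範囲はすべて仮のものです。
from pypdf import PdfReader, PdfWriter

def extract_toc(src_path: str, dst_path: str, first_page: int, last_page: int) -> None:
    """first_page〜last_page(1始まり)を別PDFとして書き出す。"""
    reader = PdfReader(src_path)
    writer = PdfWriter()
    for i in range(first_page - 1, last_page):  # pypdf のページ番号は0始まり
        writer.add_page(reader.pages[i])
    with open(dst_path, "wb") as f:
        writer.write(f)

# 例: スキャン済みPDFの3〜8ページ目が目次だった場合
extract_toc("scanned_book.pdf", "toc_only.pdf", 3, 8)
```

こうして作った目次だけのPDFをソースとして登録しておけば、ソース数上限を圧迫せずに物理本の「索引」をNotebookLM側に持たせられます。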

https://magic-x-alignment-chart.vercel.app/
7

メディアのライター装い、公開前の情報入手狙うなりすましメールに注意 イード
11
「RBB TODAY」「INSIDE」などのWebメディアを運営するイードは、同社が運営するメディアのライターを装い、公開前の情報を入手しようとするなりすましメールが一部の企業に送られていることを確認しているとし、企業の担当者に注意を呼び掛けた。なりすましメールは、送信元のフリーメールアドレスに同社名のアルファベットが含まれ、署名欄に同社の過去の住所が記載されているなどの特徴があるという。

テスラ車オーナー、マスク氏の米政治進出に不満
9
しかし、アナリストたちは、欧州の極右政党を支持したり、SNSで陰謀論を拡散したりといったマスク氏の政治的な行動が、これまでリベラル層が中心だったテスラの市場を孤立させる可能性があると指摘する。「テスラというブランドに失望している」 昨年からブラックバーン氏の車のバンパーには、「彼(マスク氏)が狂っていると知る前にこれを買った」と書かれたステッカーが貼られている。【3月14日 AFP】米バージニア州の元弁護士、トム・ブラックバーン氏(73)は、テスラの電気自動車(EV)を持つことを心から誇りに思い、10年以上前に真っ赤な目立つ車両を購入した。

【Git】リポジトリをコンパクトにする – MarkdownとBullet Journal
36
以下参考までにrmで消去したfileの復元方法を記載するが、プロセスが有効な状態に限られるし、削除したのが.gitフォルダの場合はこの方法では無理だ(代わりに下記外部ツールの利用参照)。 programmingforever.hatenablog.com Git のリポジトリが大きくなると、非常に多くのcommit、tree、そしてblob(file)objectを保持する様になる。また元のリポジトリ削除のためにrm -rfコマンドを使うので、作業の際はディレクトリの位置などに十分注意すること。 昨今の流れからブランチ名をmasterからmainに変更したい場合は下記手順を実行する。
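抜粋では具体的な手順が省略されているため、一般に知られているmaster→mainのブランチ名変更手順をPythonのsubprocessから呼び出す形の参考スケッチを示します(記事本文の手順そのものではなく、リモート名originなどは仮定です)。

```python
# master → main へのブランチ名変更を、一般的に知られている手順で行う参考スケッチ。
# 記事本文の手順そのものではなく、リモート名 "origin" などは仮定です。
import subprocess

def run(*args: str) -> None:
    print("$ git", *args)
    subprocess.run(["git", *args], check=True)

run("branch", "-m", "master", "main")        # ローカルのブランチ名を変更
run("push", "-u", "origin", "main")          # main をプッシュして追跡ブランチに設定
run("push", "origin", "--delete", "master")  # リモートの master を削除
# ※ GitHub などではデフォルトブランチを main に切り替えてから master を削除すること
```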

サイバーセキュリティに生きていると自分がどこにいるか分からなくなる|Nasotasy
17
自身の選択できる知識、スキル、サービス… 取れる対策から攻撃を守ることができたときほどこの楽しさはあるのかもしれない。その指標は、読んだ本の数?取得した資格の数?受講したセキュリティトレーニングの数?こなしたプロジェクトの回数…? この指標は実はサイバーセキュリティの仕事、キャリアにおいては明確に求められることはないかもしれない。悪いわけでは無いし、別にプログラマーやネットワークエンジニアといった界隈がそうである、そうではないと論じたいわけではないが、特にサイバーセキュリティにおけるキャリアはこういった学び続ける力、インプットし続け、調べ理解する力が求められているように感じる。

MCPに1mmだけ入門
84
https://norahsakal.com/blog/mcp-vs-api-model-context-protocol-explained/
https://zenn.dev/aimasaou/articles/96182d46ae6ad2
https://zenn.dev/tesla/articles/3d1ba14614f320 以下のお二人の善意によるご助言で理解が大きく進みました。
https://x.com/tesla0225/status/1900014379694981621
https://x.com/yukimasakiyu/status/1900005829400772771 AIとじゃれあった記録 例)Cursor / Claude Desktop 情報を取りに行くAIエージェント
MCPホストの命を受けて出勤
MCPサーバに話しかけて情報をもらう。
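この「MCPサーバに話しかけて情報をもらう」流れのイメージとして、MCPが採用しているJSON-RPC 2.0のリクエストを組み立てるだけの最小スケッチを示します(tools/list・tools/callはMCP仕様にあるメソッド名ですが、実際の送受信処理やツール名・引数は省略・仮定したものです)。

```python
# MCPクライアントがMCPサーバに「使えるツールを教えて」と尋ね、続けてツールを呼ぶときの
# JSON-RPC 2.0 メッセージを組み立てるだけのイメージスケッチ。
# 実際の送受信(stdio や HTTP)は省略しており、あくまで流れの説明用です。
import itertools
import json

_id = itertools.count(1)

def jsonrpc_request(method: str, params: dict | None = None) -> str:
    msg = {"jsonrpc": "2.0", "id": next(_id), "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg, ensure_ascii=False)

# 1. サーバが提供するツール一覧を尋ねる
print(jsonrpc_request("tools/list"))
# 2. 得られたツールを呼び出す(ツール名・引数は仮のもの)
print(jsonrpc_request("tools/call", {"name": "search_docs", "arguments": {"query": "MCPとは"}}))
```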

KDDIがiPhone向けに「RCS」を提供 大容量の画像など、OS問わず送受信OK まずはβ版のiOS 18.4で利用可能に
41
KDDIは、AndroidとiOS間で大容量の画像やデータをやりとりできるRCS(Rich Communication Services)を、β版のiOS 18.4を搭載したiPhone向けに提供している。β版のiOS 18.4を搭載したiPhoneでRCSをオンにすることで利用可能になる。日本では、2018年にドコモ、KDDI、ソフトバンクの3社が共同でRCSを採用した「+メッセージ」というサービスを開始したが、利用できるのは+メッセージ対応キャリアのユーザー間に限られている。

Zen | Unkey
23
A Minimalist HTTP Library for Go Written by Andreas Thomas Published on When we started migrating our API services from TypeScript to Go, we were looking for an HTTP framework that would provide a clean developer experience, offer precise control over middleware execution, and integrate seamlessly with OpenAPI for our SDK generation. After evaluating the popular frameworks in the Go ecosystem, we found that none quite matched our specific requirements. So, we did what engineers do: we built our own. Enter Zen, a lightweight HTTP framework built directly on top of Go's standard library. Our journey began with our TypeScript API using Hono, which offered a fantastic developer experience with Zod validations and first-class OpenAPI support. When migrating to Go, we faced several challenges with existing frameworks: Most frameworks enforce a rigid middleware execution pattern that didn't allow for our specific needs. The critical limitation we encountered was the inability to capture post-error-handling response details—a fundamental requirement not just for our internal monitoring but also for our customer-facing analytics dashboard. This last point is crucial for both debugging and customer visibility. We store these responses and make them available to our customers in our dashboard, allowing them to inspect exactly what their API clients received. When an error occurs, customers need to see the precise HTTP status code and response payload their systems encountered, not just that an error happened somewhere in the pipeline. While we could have potentially achieved this with existing frameworks, doing so would have required embedding error handling and response logging logic directly into every handler function. This would mean handlers couldn't simply return Go errors—they would need to know how to translate those errors into HTTP responses and also handle logging those responses. This approach would: Our goal was to keep handlers simple, allowing them to focus on business logic and return domain errors without worrying about HTTP status codes, response formatting, or logging.. By building Zen, we could ensure handlers remained clean and focused while still providing our customers with complete visibility into their API requests—including the exact error responses their systems encountered. While frameworks like huma.rocks offered OpenAPI generation from Go code, we preferred a schema-first approach. This approach gives us complete control over the spec quality and annotations. With our SDKs generated via Speakeasy from this spec, we need to set the bar high to let them deliver the best SDK possible. Many frameworks pull in dozens of dependencies, which adds maintenance, potential security risks and the possibility of supply chain attacks. We wanted something minimal that relied primarily on Go's standard library. Go's error model is simple, but translating errors into HTTP responses (especially RFC 7807 problem+json ones) requires special handling. Existing frameworks made it surprisingly difficult to map our domain errors to appropriate HTTP responses. Rather than forcing an existing framework to fit our needs, we decided to build Zen with three core principles in mind: Put simply, Zen is a thin wrapper around Go's standard library that makes common HTTP tasks more ergonomic while providing precise control over request handling. 
Zen consists of four primary components, each serving a specific purpose in the request lifecycle: The Session type encapsulates the HTTP request and response context, providing utility methods for common operations: Sessions are pooled and reused between requests to reduce memory allocations and GC pressure, a common performance concern in high-throughput API servers. The Route interface represents an HTTP endpoint with its method, path, and handler function. Routes can be decorated with middleware chains: At the core of Zen, middleware is just a function: But this simple definition makes it so powerful. Each middleware gets a handler and returns a wrapped handler – that's it. No complex interfaces or lifecycle hooks to learn. What's special about this approach is that it lets us control exactly when each piece of middleware runs. For example, our logging middleware captures the final status code and response body: To understand our error handling middleware, it's important to first know how we tag errors in our application. We use a custom fault package that enables adding metadata to errors, including tags that categorize the error type and separate internal details from user-facing messages. In our handlers or services, we can return tagged errors like this: The WithDesc function is crucial here – it maintains two separate messages: This separation lets us provide detailed context for troubleshooting while ensuring we never leak sensitive implementation details to users. Our error handling middleware then examines these tags to determine the appropriate HTTP response: The Server type manages HTTP server configuration, lifecycle, and route registration: The server handles graceful shutdown, goroutine management, and session pooling automatically. Unlike frameworks that generate OpenAPI specs from code, we take a schema-first approach. Our OpenAPI spec is hand-crafted for precision and then used to generate Go types and validation logic: Our validation package uses pb33f/libopenapi-validator which provides structural and semantic validation based on our OpenAPI spec. In an ideal world we wouldn't use a dependency for this, but it's way too much and too error prone to implement ourselves at this stage. Creating Zen has provided us with several key advantages: We now have granular control over middleware execution, allowing us to capture metrics, logs, and errors exactly as needed. The middleware is simple to understand and compose, making it easy to add new functionality or modify existing behavior. By taking a schema-first approach to OpenAPI, we maintain full control over our API contract while still getting Go type safety through generated types. This ensures consistency across our SDKs and reduces the likelihood of API-breaking changes. Zen relies almost entirely on the standard library, with only a few external dependencies for OpenAPI validation. This reduces our dependency footprint and makes the codebase easier to understand and maintain. Zen follows Go conventions and idioms, making it feel natural to Go developers. Handler functions receive a context as the first parameter and return an error, following common Go patterns. The Session methods for binding request bodies and query parameters into Go structs provide type safety without boilerplate. The error handling middleware gives structured, consistent error responses. 
Here's a complete handler from our rate-limiting API that shows how all these components work together: The handler is just a function that returns an error, making it easy to test and reason about. All the HTTP-specific logic (authentication, validation, error handling, response formatting) is handled by middleware or injected services. Zen's simple design makes testing very easy, even our CEO loves it. Because routes are just functions that accept a context and session and return an error, they're easy to unit test: We've built test utilities that make it easy to set up a test harness with database dependencies, register routes, and call them with typed requests and responses. Zen lives in our open source mono repo, so you can explore or even use it in your own projects. The full source code is available in our GitHub repository at github.com/unkeyed/unkey/tree/main/go/pkg/zen. While we built Zen specifically for our needs, we recognize that other teams might face similar challenges with Go HTTP frameworks. You're welcome to: While the Go ecosystem offers many excellent HTTP frameworks, sometimes the best solution is a custom one tailored to your specific needs. A thin layer on top of Go's standard library can provide significant ergonomic benefits without sacrificing control or performance. As our API continues to grow, the simplicity and extensibility of Zen will allow us to add new features and functionality without compromising on performance or developer experience. The best abstractions are those that solve real problems without introducing new ones, and by starting with Go's solid foundation and carefully adding only what we needed, we've created a framework that enables our team to build with confidence.
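The "middleware is just a function that wraps a handler" idea the post describes translates readily outside Go; below is a minimal Python analogy of that wrap-a-handler pattern (this is not Zen's actual code, and all names are made up).

```python
# A minimal Python sketch of the "middleware wraps a handler" pattern the post describes.
# This is an analogy, not Zen's actual Go code; all names are made up.
from typing import Callable, Dict, Tuple

Handler = Callable[[Dict], Tuple[int, str]]   # request -> (status, body)
Middleware = Callable[[Handler], Handler]     # handler -> wrapped handler

def with_error_handling(next_handler: Handler) -> Handler:
    def wrapped(request: Dict) -> Tuple[int, str]:
        try:
            return next_handler(request)
        except KeyError as exc:        # pretend this is a "not found" domain error
            return 404, f"not found: {exc}"
        except Exception:
            return 500, "internal error"
    return wrapped

def with_logging(next_handler: Handler) -> Handler:
    def wrapped(request: Dict) -> Tuple[int, str]:
        status, body = next_handler(request)
        # Runs *after* the inner handler, so it sees the final status and body --
        # the post-error-handling visibility the post cares about for logging/analytics.
        print(f"{request.get('method')} {request.get('path')} -> {status}")
        return status, body
    return wrapped

def hello(request: Dict) -> Tuple[int, str]:
    return 200, "hello"

# Ordering is explicit: logging observes the response produced by error handling.
handler = with_logging(with_error_handling(hello))
print(handler({"method": "GET", "path": "/hello"}))
```

The point of the sketch is the ordering guarantee: because each middleware simply wraps the next handler, the outermost layer sees exactly the status and body the client receives.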

最近、迷惑な営業電話が止まらなくなったので、iPhoneの「不明な発信者を消音」する設定を有効化して事なきを得た。"電話"がついに終わりを迎えた気がしてる
78
かつては国が電話というインフラを提供していたのと同じく、いち企業に過度に依存しない共通規格としての連絡手段のプロトコルが必要なのではとぼんやり思う これマジで快適 pic.x.com/9pXoP76TF2 x.com/knshtyk/status… 自分もつい最近同じ設定にして、登録してない番号からの着信が鳴らないようにしてる😗 x.com/knshtyk/status… 後で設定する x.com/knshtyk/status… iPhoneにこの機能が導入されてからすぐにオンにしてる。 x.com/knshtyk/status… 通話状態にすると受話側が通話料払わされるものがあるので、ソフバンの不審電話検出アプリと組み合わせて海外発信や国内でも警戒すべき番号、知らない電話番号は無慈悲に赤いボタン押して即切り着信拒否送りにしてるな x.com/knshtyk/status… 私も似たような使い方。 x.com/knshtyk/status… @knshtyk とりあえず0800には出ない @knshtyk これなら電話より電報がいいと思ってる @knshtyk 登録してない電話番号以外はすべて無視してる
留守電入ってれば確認するぐらいで
緊急の用事なら留守電ぐらい入れるだろうからな @knshtyk 同じ事思う。

Harden-Runner detection: tj-actions/changed-files action is compromised – StepSecurity
15
March 14, 2025 We are actively investigating a critical security incident involving the tj-actions/changed-files GitHub Action. While our investigation is ongoing, we want to alert users so they can take immediate corrective actions. We will keep this post updated as we learn more. ‍StepSecurity Harden-Runner detected this issue through anomaly detection when an unexpected endpoint appeared in the network traffic. Based on our analysis, the incident started around 9:00 AM March 14th, 2025 Pacific Time (PT) / 4:00 PM March 14th, 2025 UTC. StepSecurity has released a free secure drop-in replacement for this Action to help recover from the incident: step-security/changed-files. We highly recommend you replace all instances of tj-actions/changed-files with the StepSecurity secure alternatives. Update March 14, 2025 11:00 PM UTC: Most versions of tj-actions/changed-files are compromised. Update March 15, 2025 02:00 AM UTC: We have detected multiple public repositories have leaked secrets in build logs. As these build logs are public, anyone can steal these secrets. If you maintain any public repositories that use this Action, please review the recovery steps immediately. Update March 15, 2025 02:00 PM UTC: GitHub has removed the tj-actions/changed-files Action. GitHub Actions workflows can no longer use this Action. Update March 15, 2025 10:00 PM UTC: The tj-actions/changed-files repository has been restored back. None of the versions include the malicious exploit code anymore. Update March 16, 2025 6:00 AM UTC: We have received several questions from the community. To help recover from the incident and answer questions from the community, we are hosting an Office Hour on March 18, 2025 at 10:00 AM Pacific Time (PT). Add the event to your calendar. ‍The tj-actions/changed-files GitHub Action, which is currently used in over 23,000 repositories, has been compromised. In this attack, the attackers modified the action’s code and retroactively updated multiple version tags to reference the malicious commit. The compromised Action prints CI/CD secrets in GitHub Actions build logs. If the workflow logs are publicly accessible (such as in public repositories), anyone could potentially read these logs and obtain exposed secrets. There is no evidence that the leaked secrets were exfiltrated to any remote network destination. Our Harden-Runner solution flagged this issue when an unexpected endpoint appeared in the workflow’s network traffic. This anomaly was caught by Harden-Runner’s behavior-monitoring capability. The compromised Action now executes a malicious Python script that dumps CI/CD secrets from the Runner Worker process. Most of the existing Action release tags have been updated to refer to the malicious commit mentioned below (@stevebeattie notified us about this). Note: All these tags now point to the same malicious commit hash:0e58ed8671d6b60d0890c21b07f8835ace038e67, indicating the retroactive compromise of multiple versions.” @salolivares has identified the malicious commit that introduces the exploit code in the Action. https://github.com/tj-actions/changed-files/commit/0e58ed8671d6b60d0890c21b07f8835ace038e67 The base64 encoded string in the above screenshot contains the exploit code. Here is the base64 decoded version of the code.‍ ‍ Here is the content of https://gist.githubusercontent.com/nikitastupin/30e525b776c409e03c2d6f328f254965/raw/memdump.py ‍ Even though GitHub shows renovate as the commit author, most likely the commit did not actually come up renovate bot. 
The commit is an un-verified commit, so likely the adversary provided renovate as the commit author to hide their tracks. StepSecurity Harden-Runner secures CI/CD workflows by controlling network access and monitoring activities on GitHub-hosted and self-hosted runners. The name "Harden-Runner" comes from its purpose: strengthening the security of the runners used in GitHub Actions workflows. The Harden-Runner community tier is free for open-source projects. In addition, it offers several enterprise features. ‍ When this Action is executed with Harden-Runner, you can see the malicious code in action. We reproduced the exploit in a test repository. When the compromised tj-actions/changed-files action runs, Harden-Runner’s insights clearly show it downloading and executing a malicious Python script that attempts to dump sensitive data from the GitHub Actions runner’s memory. You can see the behavior here: https://app.stepsecurity.io/github/step-security/github-actions-goat/actions/runs/13866127357‍To reproduce this, you can run the following workflow: When this workflow is executed, you can see the malicious behavior through Harden-Runner: https://app.stepsecurity.io/github/step-security/github-actions-goat/actions/runs/13866127357 ‍ When this workflow runs, you can observe the malicious behavior in the Harden-Runner insights page. The compromised Action downloads and executes a malicious Python script, which attempts to dump sensitive data from the Actions Runner process memory. 🚨 If you are using any version of the tj-actions/changed-files Action, we strongly recommend you stop using it immediately until the incident is resolved. To support the community during this incident, we have released a free, secure, and drop-in replacement: step-security/changed-files. We recommend updating all instances of j-actions/changed-files in your workflows to this StepSecurity-maintained Action. To use the StepSecurity maintained Action, simply replace all instances of "tj-actions/changed-files@vx" with "step-security/changed-files@3dbe17c78367e7d60f00d78ae6781a35be47b4a1 # v45.0.1" or "step-security/changed-files@v45". For enhanced security, you can pin to the specific commit SHA: You can also reference the Action through its latest release tag: For more details, please refer to README of the project. ‍ You should perform a code search across your repositories to discover all instances of the tj-actions/changed-files Action. For example, the following GitHub search URL shows all instances of this Action in the Actions GitHub organization:https://github.com/search?q=org%3Aactions%20tj-actions%2Fchanged-files%20Action&type=code‍Please note that this GitHub search does not always return accurate results. If you have dedicated source code search solutions such as SourceGraph, they could be more effective with finding all instances of this Action in use. You should review logs for the recent executions of the Action and see if it has leaked secrets. Below is an example of how leaked secrets appear in build logs. ‍ This step is especially important for public repositories since their logs are publicly accessible. If you discover any secrets in GitHub Actions workflow run logs, rotate them immediately. The following steps are applicable only for StepSecurity enterprise customers. If you are not an existing enterprise customer, you can start our 14 day free trial by installing the StepSecurity GitHub App to complete the following recovery step. 
You can use the Actions inventory feature to discover all GitHub Actions workflows that are using tj-actions/changed-files. You can see if your workflows have called "gist.githubusercontent.com" by visiting "All Destinations" in your StepSecurity dashboard. If this endpoint appears in the list, review the workflow runs that called this endpoint. We offer secure drop-in replacements for risky third-party Actions as part of our enterprise tier. We are currently in the process of onboarding this Action as a StepSecurity Maintained Action. Once onboarded, our enterprise customers can use the StepSecurity Maintained version of tj-actions/changed-files instead of the compromised versions. We have reported this issue to GitHub and opened an issue in the affected repository: 🔗 GitHub Issue #2463 The GitHub issue is no longer accessible as the repository has been deleted. An official CVE (CVE-2025-30066) has been published to track this incident. We will continue to monitor the situation and provide updates as more information becomes available. For real-time security monitoring and proactive anomaly detection in GitHub Actions workflows, consider using Harden-Runner to detect and mitigate such threats.
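As a quick way to carry out the "find all instances" step locally, here is a small Python sketch that scans a repository's workflow files for references to tj-actions/changed-files (the paths and output format are my own assumptions, not part of the advisory).

```python
# Quick local scan for the "find all instances of tj-actions/changed-files" step.
# Paths and output format are my own assumptions, not part of the advisory.
import pathlib
import re

PATTERN = re.compile(r"uses:\s*tj-actions/changed-files@(\S+)")

def scan_workflows(repo_root: str = ".") -> None:
    for wf in pathlib.Path(repo_root).glob(".github/workflows/*.y*ml"):
        for lineno, line in enumerate(wf.read_text(encoding="utf-8").splitlines(), 1):
            match = PATTERN.search(line)
            if match:
                print(f"{wf}:{lineno}: references tj-actions/changed-files@{match.group(1)}")

if __name__ == "__main__":
    scan_workflows()
```

Remember to run it on every branch, not just the default one, since workflows on other branches can still execute the compromised Action.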

Anthropic API のトークン最適化によるコスト削減
40
https://docs.anthropic.com/en/docs/build-with-claude/tool-use/token-efficient-tool-use 要は、Tool Use 利用時にモデルとしてclaude-3-7-sonnet-20250219を指定、リクエストヘッダーにtoken-efficient-tools-2025-02-19を追加することで、出力トークン数で平均 14%、最大 70% 削減、レイテンシの改善ができるという代物。https://www.anthropic.com/news/token-saving-updates Anthropic API では 2024 年の 8 月から Prompt Caching が導入されています。 https://www.anthropic.com/news/prompt-caching Prompt Caching について詳しく知りたいという方はこちらの記事をご覧ください。
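記事の内容を踏まえると、公式のAnthropic Python SDKであれば概ね次のような形でモデル指定とベータヘッダーの追加ができるはずです(ツール定義の中身は仮のもので、SDKのバージョンによって書き方が異なる可能性がある点はご留意ください)。

```python
# claude-3-7-sonnet を指定し、token-efficient-tools のベータヘッダーを付けて
# Tool Use を呼ぶ最小スケッチ(公式 Python SDK を使う想定。ツール定義は仮のもの)。
import anthropic

client = anthropic.Anthropic()  # ANTHROPIC_API_KEY を環境変数から読む想定

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=1024,
    tools=[{
        "name": "get_weather",  # 仮のツール定義
        "description": "指定した都市の現在の天気を返す",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    messages=[{"role": "user", "content": "東京の天気を教えて"}],
    # 記事にあるベータヘッダーを追加(これにより出力トークンが削減されるとされる)
    extra_headers={"anthropic-beta": "token-efficient-tools-2025-02-19"},
)
print(response.usage.output_tokens)
```

ヘッダー有無で response.usage.output_tokens を比べれば、自分のツール定義でどの程度削減されるかを確認できます。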

Highlights from Git 2.49
13
The open source Git project just released Git 2.49 with features and bug fixes from over 89 contributors, 24 of them new. We last caught up with you on the latest in Git back when 2.48 was released. To celebrate this most recent release, here is GitHub's look at some of the most interesting features and changes introduced since last time. Many times over this series of blog posts, we have talked about Git's object storage model, where objects can be written individually (known as "loose" objects), or grouped together in packfiles. Git uses packfiles in a wide variety of functions, including local storage (when you repack or GC your repository), as well as when sending data to or from another Git repository (like fetching, cloning, or pushing). Storing objects together in packfiles has a couple of benefits over storing them individually as loose. One obvious benefit is that object lookups can be performed much more quickly in pack storage. When looking up a loose object, Git has to make multiple system calls to find the object you're looking for, open it, read it, and close it.
These system calls can be made faster using the operating system’s block cache, but because objects are looked up by a SHA-1 (or SHA-256) of their contents, this pseudo-random access isn’t very cache-efficient. But most interesting to our discussion is that since loose objects are stored individually, we can only compress their contents in isolation, and can’t store objects as deltas of other similar objects that already exist in your repository. For example, say you’re making a series of small changes to a large blob in your repository. When those objects are initially written, they are each stored individually and zlib compressed. But if the majority of the file’s content remains unchanged among edit pairs, Git can further compress these objects by storing successive versions as deltas of earlier ones. Roughly speaking, this allows Git to store the changes made to an object (relative to some other object) instead of multiple copies of nearly identical blobs. But how does Git figure out which pairs of objects are good candidates to store as delta-base pairs? One useful proxy is to compare objects that appear at similar paths. Git does this today by computing what it calls a “name hash”, which is effectively a sortable numeric hash that weights more heavily towards the final 16 non-whitespace characters in a filepath (source). This function comes from Linus all the way back in 2006, and excels at grouping functions with similar extensions (all ending in .c, .h, etc.), or files that were moved from one directory to another (a/foo.txt to b/foo.txt). But the existing name-hash implementation can lead to poor compression when there are many files that have the same basename but very different contents, like having many CHANGELOG.md files for different subsystems stored together in your repository. Git 2.49 introduces a new variant of the hash function that takes more of the directory structure into account when computing its hash. Among other changes, each layer of the directory hierarchy gets its own hash, which is downshifted and then XORed into the overall hash. This creates a hash function which is more sensitive to the whole path, not just the final 16 characters. This can lead to significant improvements both in packing performance, but also in the resulting pack’s overall size. For instance, using the new hash function was able to improve the time it took to repack microsoft/fluentui from ~96 seconds to ~34 seconds, and slimming down the resulting pack’s size from 439 MiB to just 160 MiB (source). While this feature isn’t (yet) compatible with Git’s reachability bitmaps feature, you can try it out for yourself using either git repack’s or git pack-objects’s new –name-hash-version flag via the latest release. [source] Have you ever been working in a partial clone and gotten this unfriendly output? What happened here? To understand the answer to that question, let’s work through an example scenario: Suppose that you are working in a partial clone that you cloned with –filter=blob:none. In this case, your repository is going to have all of its trees, commit, and annotated tag objects, but only the set of blobs which are immediately reachable from HEAD. Put otherwise, your local clone only has the set of blobs it needs to populate a full checkout at the latest revision, and loading any historical blobs will fault in any missing objects from wherever you cloned your repository. In the above example, we asked for a blame of the file at path README.md. 
In order to construct that blame, however, we need to see every historical version of the file in order to compute the diff at each layer to figure out whether or not a revision modified a given line. But here we see Git loading in each historical version of the object one by one, leading to bloated storage and poor performance. Git 2.49 introduces a new tool, git backfill, which can fault in any missing historical blobs from a –filter=blob:none clone in a small number of batches. These requests use the new path-walk API (also introduced in Git 2.49) to group together objects that appear at the same path, resulting in much better delta compression in the packfile(s) sent back from the server. Since these requests are sent in batches instead of one-by-one, we can easily backfill all missing blobs in only a few packs instead of one pack per blob. After running git backfill in the above example, our experience looks more like: But running git backfill immediately after cloning a repository with –filter=blob:none doesn’t bring much benefit, since it would have been more convenient to simply clone the repository without an object filter enabled in the first place. When using the backfill command’s –sparse option (the default whenever the sparse checkout feature is enabled in your repository), Git will only download blobs that appear within your sparse checkout, avoiding objects that you wouldn’t checkout anyway. To try it out, run git backfill in any –filter=blob:none clone of a repository using Git 2.49 today! [source, source] The zlib-ng fork merges many of the optimizations made above, as well as removes dead code and workarounds for historical compilers from upstream zlib, placing a further emphasis on performance. For instance, zlib-ng has support for SIMD instruction sets (like SSE2, and AVX2) built-in to its core algorithms. Though zlib-ng is a drop-in replacement for zlib, the Git project needed to update its compatibility layer to accommodate zlib-ng. In Git 2.49, you can now build Git with zlib-ng by passing ZLIB_NG when building with the GNU Make, or the zlib_backend option when building with Meson. Early experimental results show a ~25% speed-up when printing the contents of all objects in the Git repository (from ~52.1 seconds down to ~40.3 seconds). [source] This release marks a major milestone in the Git project with the first pieces of Rust code being checked in. Specifically, this release introduces two Rust crates: libgit-sys, and libgit which are low- and high-level wrappers around a small portion of Git’s library code, respectively. The Git project has long been evolving its code to be more library-oriented, doing things like replacing functions that exit the program with ones that return an integer and let the caller decide to exit or, cleaning up memory leaks, etc. This release takes advantage of that work to provide a proof-of-concept Rust crate that wraps part of Git’s config.h API. This isn’t a fully-featured wrapper around Git’s entire library interface, and there is still much more work to be done throughout the project before that can become a reality, but this is a very exciting step along the way. [source] Speaking of the “libification” effort, there were a handful of other related changes that went into this release. The ongoing effort to move away from global variables like the_repository continues, and many more commands in this release use the provided repository instead of using the global one. 
This release also saw a lot of effort being put into squelching -Wsign-compare warnings, which occur when a signed value is compared against an unsigned one. This can lead to surprising behavior when comparing, say, negative signed values against unsigned ones, where a comparison like -1 < 2 (which should return true) ends up returning false instead. Hopefully you won’t notice these changes in your day-to-day use of Git, but they are important steps along the way to bringing the project closer to being able to be used as a standalone library. [source, source, source, source, source] Long-time readers might remember our coverage of Git 2.39 where we discussed git repack’s new –expire-to option. In case you’re new around here or could use a refresher, we’ve got you covered. The –expire-to option in git repack controls the behavior of unreachable objects which were pruned out of the repository. By default, pruned objects are simply deleted, but –expire-to allows you to move them off to the side in case you want to hold onto them for backup purposes, etc. git repack is a fairly low-level command though, and most users will likely interact with Git’s garbage collection feature through git gc. In large part, git gc is a wrapper around functionality that is implemented in git repack, but up until this release, git gc didn’t expose its own command-line option to use –expire-to. That changed in Git 2.49, where you can now experiment with this behavior via git gc –expire-to! [source] You may have read that Git’s help.autocorrect feature is too fast for Formula One drivers. In case you haven’t, here are the details. If you’ve ever seen output like: …then you have used Git’s autocorrect feature. But its configuration options don’t quite match the convention of other, similar options. For instance, in other parts of Git, specifying values like “true”, “yes”, “on”, or “1” for boolean-valued settings all meant the same thing. But help.autocorrect deviates from that trend slightly: it has special meanings for “never”, “immediate”, and “prompt”, but interprets a numeric value to mean that Git should automatically run whatever command it suggests after waiting that many deciseconds. So while you might have thought that setting help.autocorrect to “1” would enable the autocorrect behavior, you’d be wrong: it will instead run the corrected command before you can even blink your eyes1. Git 2.49 changes the convention of help.autocorrect to interpret “1” like other boolean-valued commands, and positive numbers greater than 1 as it would have before. While you can’t specify that you want the autocorrect behavior in exactly 1 decisecond anymore, you probably never meant to anyway. [source, source] You might be aware of git clone’s various options like –branch or –tag. When given, these options allow you to clone a repository’s history leading up to a specific branch or tag instead of the whole thing. These options are often used in CI farms when they want to clone a specific branch or tag for testing. But what if you want to clone a specific revision that isn’t at any branches or tags in your repository, what do you do? Prior to Git 2.49, the only thing you could do is initialize an empty repository and fetch a specific revision after adding the repository you’re fetching from as a remote. 
Git 2.49 introduces a much more convenient method to round out the --branch and --tag options by adding a new --revision option that fetches history leading up to the specified revision, regardless of whether or not there is a branch or tag pointing at it. [source] Speaking of remotes, you might know that the git remote command uses your repository's configuration to store the list of remotes that it knows about. You might not know that there were actually two different mechanisms which preceded storing remotes in configuration files. In the very early days, remotes were configured via separate files in $GIT_DIR/branches (source). A couple of weeks later, the convention changed to use $GIT_DIR/remote instead of the /branches directory (source). Both conventions have long since been deprecated and replaced with the configuration-based mechanism we're familiar with today (source, source). But Git has maintained support for them over the years as part of its backwards compatibility. When Git 3.0 is eventually released, these features will be removed entirely. If you want to learn more about Git's upcoming breaking changes, you can read all about them in Documentation/BreakingChanges.adoc. If you really want to live on the bleeding edge, you can build Git with the WITH_BREAKING_CHANGES compile time switch, which compiles out features that will be removed in Git 3.0. [source, source] Last but not least, the Git project had two wonderful Outreachy interns that recently completed their projects! Usman Akinyemi worked on adding support to include uname information in Git's user agent when making HTTP requests, and Seyi Kuforiji worked on converting more unit tests to use the Clar testing framework. You can learn more about their projects here and here. Congratulations, Usman and Seyi! [source, source, source, source] That's just a sample of changes from the latest release. For more, check out the release notes for 2.49, or any previous version in the Git repository.
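To make two of the new options above concrete, here is a hedged sketch that drives them from Python via subprocess (it requires Git 2.49 or later; the repository URL, SHA, and paths are placeholders, not taken from the post).

```python
# Sketch of driving two Git 2.49 additions mentioned above from Python.
# Requires git >= 2.49; URLs, SHAs, and paths are placeholders.
import subprocess

def git(*args: str, cwd: str | None = None) -> None:
    print("$ git", *args)
    subprocess.run(["git", *args], cwd=cwd, check=True)

# 1. Clone history leading up to a specific revision, even when no branch or tag
#    points at it (placeholder SHA and URL).
git("clone", "--revision=0123456789abcdef0123456789abcdef01234567",
    "https://example.com/repo.git", "pinned-clone")

# 2. In a --filter=blob:none partial clone, batch-download the missing historical
#    blobs; --sparse limits the backfill to paths inside the sparse checkout.
git("clone", "--filter=blob:none", "https://example.com/repo.git", "partial-clone")
git("backfill", "--sparse", cwd="partial-clone")
```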

「頭痛」の言葉やたばこ吸う行為は尾行の信号、南北間で隠語・暗号飛ぶ情報戦…ユーチューブでもやりとり
17
韓国の水原地裁で昨年11月、北朝鮮の工作機関の指示で韓国内でスパイ活動を行ったなどとして、韓国のスパイ組織の男ら3人に実刑判決が言い渡された。 支社長 10時に手に持っていた水のペットボトルを開けて飲む動作を実行 本社メンバー その信号動作を確認した後、7~8メートルの距離でサングラスを2、3回ハンカチで拭く動作を実行 指令文にはこう書かれていた。

「マイナ免許証読み取りアプリ」公開 運転免許情報をスマホ・PCで確認
28
ニュース 臼田勤哉 2025年3月14日 20:01 警察庁は12日、マイナンバーカードと運転免許証・運転経歴証明書を一体化した「マイナ免許証」の免許証関連データを読み取れる「マイナ免許証読み取りアプリ」を提供開始した。2025年3月24日から、マイナ免許証の運用が開始される。

Semgrep | 🚨 Popular GitHub Action tj-actions/changed-files is compromised
21
Popular GitHub Action tj-actions/changed-files has been compromised (GitHub issue) with a payload that appears to attempt to dump secrets, impacting thousands of CI pipelines. This isn’t the first security issue with tj-actions/changed-files—see prior vulnerability CVE-2023-51664. Find out where you're affected The simplest way to find this is to grep for tj-actions in your codebase. If you're on GitHub, look at the results of this query, replacing YOURORG with your organization's name on GitHub: https://github.com/search?q=org%3A<YOURORG>+uses%3A+tj-actions%2F&type=code Arguably, Semgrep is overkill for this case. But Lewis Ardern on our team wrote a Semgrep rule to find usages of tj-actions, which you can run locally (without sending code to the cloud) via: semgrep --config r/10Uz5qo/semgrep.tj-actions-compromised. And if we find more information about what tags & commits are affected, we can update the rule over time to become more precise about whether or not you could be impacted. At time of writing, it looks like all versions are compromised.
For users of Semgrep AppSec Platform, we recommend placing the detection rule in blocking mode immediately: visit the rule, click “add to policy”, and select “blocking mode.” Stop using tj-actions/changed-files immediately. Switch to a safer alternative or inline your file-change detection logic. Just removing it from the main branch of your repository won’t be enough — it could still run on other branches depending on how your actions are configured. So you need to remove it from all branches to be safe. As an alternative, GitHub has a feature that lets you allow-list GitHub actions so you can ensure it won’t run, even if it’s still in your code. You’ll need a list of GitHub Actions used at your org. Run this query on your codebase: Remove tj-actions/changed-files from the list of GitHub Actions. Go to GitHub settings and configure like this at: https://github.com/semgrep/semgrep-app/settings/actions Generally, pin all GitHub Actions to specific commit SHAs (rather than version tags) you know are safe. In this case, it appears that all versions are compromised. Audit past workflow runs for signs of compromise. Check logs for suspicious outbound network requests. Prioritize repos where your CI runner logs are public, as secrets are dumped to stdout in the payload. At time of writing (2025-03-14T23:55:00Z), we assessed by inspecting tag pointers in the source repo that all versions of tj-actions/changed-files are compromised. Users may verify with git tag --points-at 0e58ed8. See commit 0e58ed8 in https://github.com/tj-actions/changed-files. StepSecurity’s Incident Analysis https://github.com/tj-actions/changed-files/issues/2463 CVE-2023-51664

MCPサーバーが切り拓く!自社サービス運用の新次元 – エムスリーテックブログ
280
サービスの機能をMCP経由で使うこと自体にもメリットはありそうですが、個人的にはサービス運用やカスタマーサポートにこそ真価を発揮するのでは?と思い、MCPサーバーによる業務効率化の可能性として、以下のような活用方法を検討しています。
*1:現在AskDoctorsにMCPサーバーとしての機能や外部から検索を可能にするAPIなどの機能はありません。 社内システムにおけるMCPの活用可能性を探るため、遠隔健康医療相談サービス「AskDoctors」のQ&A検索機能をMCPサーバーとして実装する検証を行いました。

「UIも自動化も後回し」: AIエージェント開発の実践的アプローチ – Algomatic Tech Blog
145
なぜなら、AIエージェント開発では人間の思考を再現しワークフローに落とし込むわけですが、日々のオペレーションを経てワークフローの解像度やLLMのアウトプットへの理解が高まることで「もっとこうすべき」「こうしないと精度が担保できない」などの変更が発生するからです。 capybara-algomatic
2025-03-14 18:37 AIの特性上、「いい感じにアウトプットして!」みたいなオーダーでは良い結果を得ることは難しく、期待するアウトプットをいかに言語化してAIに指示できるかが肝です。

Apple Intelligenceがクソ過ぎてiPhone 17の販売にも影響を与える可能性 – こぼねみ
24
kobonemi
2025-03-14 10:02 現在の噂や新情報は、iPhone 17シリーズ、Apple Watch Series 11、次世代Macなど。 Kuo氏は昨年7月の段階で、Apple IntelligenceがiPhoneのアップグレードを促進するという期待は「楽観的すぎる」可能性が高いと述べており、今年1月にはさらに、Appleが6月にApple Intelligenceの機能を披露してから10月に開始されるまで時間がかかったために、Apple Intelligenceの魅力は「著しく低下した」と述べていました。 Apple IntelligenceのSiriの遅延によってAppleが直面している否定的な人々の反応は、今後数ヶ月のiPhone 16とiPhone 17モデルの販売にも影響を与える可能性があります。
