Linux Mint has confirmed it is switching to a longer development cycle in order to give the team more time to ‘fix bugs and improve the desktop’. As a result, the Linux Mint 23 release is now slated to launch in December 2026. It will, among other planned changes, use the same installer as LMDE (Linux Mint Debian Edition), as this offers better OEM install, SecureBoot and LVM/LUKS support. Project lead Clement Lefebvre signalled back in February that upending the distro’s standard twice-yearly release model was on the cards, noting that “…one of our strengths is that we’re doing things incrementally and changing […]
When the annual crime statistics are published, racist agitation and calls for tougher criminal laws often follow. Yet the statistics say less about the actual state of security in the country than many assume.
The crime statistics are published each year amid great interest. – All rights reserved IMAGO / IPON
Every spring, the Bundeskriminalamt publishes the Police Crime Statistics. Many perceive them as an accurate picture of crime in the country, and accordingly they are often instrumentalised politically. Yet the Polizeiliche Kriminalstatistik (PKS) is first and foremost a kind of activity report of the police. It should be taken with a grain of salt.
The PKS records suspected offences that the police pass on to the public prosecutors. Whether the prosecutors then drop the proceedings, and whether anyone is convicted, these statistics do not tell us. Nor do we learn whether the rise in a category of crime is down to the police shifting their focus, improved investigative methods, a shrinking dark figure of unreported crime, or people being more willing to report offences.
On top of that, racist attitudes lead to certain population groups being stopped and checked more often by the police or reported more frequently by the majority population, which inflates their presence in the statistics. All of this distorts the Police Crime Statistics.
Polarisation and stigmatisation
The statistics have drawn criticism for how readily they lend themselves to instrumentalisation and to racist agitation. "As an instrument for assessing the security situation, the police crime statistics are unsuitable," read an open letter (PDF) that numerous human rights organisations and criminologists signed last year. Rather, the signatories argued, the statistics contribute to polarising society and stigmatising certain population groups.
The media organisation Correctiv has now fact-checked popular myths and instrumentalisations surrounding the crime statistics. The article points out, for example, that linking the police statistics to the judicial statistics is overdue. That would make it possible to trace how often an offence recorded by the police actually ends in a conviction in court. Because, as Correctiv writes, 60 percent of investigative proceedings were dropped, according to the Federal Statistical Office.
The fact check also shows that the number of cases is tied to population growth. More suspects does not necessarily mean that a larger share of the population has become criminal. That share is captured statistically by the "Tatverdächtigenbelastungszahl" (suspect rate). Correctiv examined this figure and concludes:
[..] although the absolute number of suspects has risen at times in recent years (for instance between 2013 and 2015, or between 2021 and 2023): the share of people in society whom the police suspected of a crime has fallen overall since 2009.
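In essence, the Tatverdächtigenbelastungszahl is just the number of suspects per 100,000 inhabitants (officially the PKS computes it over the population aged eight and up; the sketch below skips that detail and uses invented figures purely to show how an absolute count can rise while the per-capita rate falls):

```python
def suspect_rate(suspects: int, population: int) -> float:
    """Suspects per 100,000 inhabitants (simplified 'Tatverdächtigenbelastungszahl')."""
    return suspects / population * 100_000

# Hypothetical illustrative figures, NOT actual PKS data:
earlier = suspect_rate(2_200_000, 80_000_000)  # 2750.0 per 100k
later = suspect_rate(2_250_000, 84_000_000)    # ~2678.6 per 100k

# The absolute count rose, yet the rate fell:
print(2_250_000 > 2_200_000, later < earlier)  # True True
```

This is exactly the trap of quoting raw case numbers: without normalising by population, "more suspects" tells you very little.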
Another area in Correctiv's report is offences against sexual self-determination. According to the PKS, these have risen dramatically over recent decades. The background to this is above all better legal protection against such offences, through an expansion of what is punishable in the first place, along with heightened awareness, so that more people, and above all women, dare to actually report such offences. Nevertheless, the dark figure in this field remains large: it recently emerged that in cases of digital violence, around 97 percent of all incidents are never reported.
Demographic and social aspects ignored
A favourite of racist agitation, which is currently booming from the AfD all the way to the Federal Chancellor, is the narrative of criminal migrants. The PKS records no migration background: whoever holds a German passport counts as German, no matter where the person was born or what nationality their parents held. The non-Germans recorded in the PKS, meanwhile, are not only refugees or residents without a German passport but also tourists. Anyone wielding PKS case numbers to stoke right-wing sentiment ignores not only this, but also that the non-German population skews younger, and younger people, wherever they come from, generally commit more offences. Only since 2015, by contrast, has the Bundeskriminalamt recorded the criminality of "immigrants" in separate situation reports; according to the BKA, these are people going through an asylum procedure, holding a residence permit or a toleration status ("Duldung"), or facing deportation.
Right-wing radicals like to brandish so-called "knife crime", which has supposedly exploded since 2015. That is unserious, as Correctiv explains: "acts in which an attack with a knife is directly threatened or carried out against a person" have only been fully recorded since 1 January 2024. Ninety percent of recorded "knife crime" is committed by men over 21. Non-German suspects are indeed overrepresented in these statistics. But alongside demographic and social aspects, numerous other factors favour this imbalance, as the criminologist Gina Wollinger explains.
A distorted perception of crime
The danger of the police statistics distorting the picture of crime is amplified by the fact that a great many people hold a perception of crime that is completely detached from reality. In a 2021 survey by the Konrad-Adenauer-Stiftung (KAS), almost two thirds assumed that crime had increased strongly or very strongly over the previous five years, while only six percent of respondents assessed the development of crime realistically.
Where this misperception comes from has not been conclusively researched. The KAS study was somewhat at a loss here: the worry about rising crime cannot be dispelled by actually falling crime figures, the authors noted at the time.
Part of the phenomenon is probably down to the news values of negativity and proximity, which lead media outlets to rate reports about harm and crime close to home as more relevant. The result is a media skew that does not match the real development. This skew is fuelled further by an interior policy that caters to these misperceptions, which in turn amplifies security coverage even more. Add to this an age factor that the studies confirm: the older the respondents, the more they fear an increase in crime. In an ageing society, fear therefore grows, and with it the political pressure on the topic.
A narrow concept of security
The security concept's narrow focus on delinquency and crime contributes to yet another distortion, and with it to skewed political priorities. A broader concept of security would foreground social security: housing, a fair distribution of wealth, a good health system, easy mobility, and secure jobs. A concept of security that looks even further into the future would also bring the dangers of war, environmental disasters, and climate collapse into focus.
The work of netzpolitik.org is financed almost entirely by donations from our readers. Become part of this unique community and support our public-interest, ad-free and tracking-free journalism with a donation today.
The big news, and it's good news, is coming from France. The government's digital agency DINUM is moving its workstations from Windows to Linux, with every French ministry required to submit a plan by autumn 2026 to reduce dependence on non-European software.
Two very different directions. Both worth paying attention to.
Here are other highlights of this edition of FOSS Weekly:
A new Linux kernel release.
France replacing Windows with Linux.
Microsoft locking out open source developers.
And other Linux news, tips, and, of course, memes!
This edition of FOSS Weekly is supported by Aiven.
Aiven just launched a permanent free tier for OpenSearch, offering a fully managed, persistent playground for your projects. With 4GB RAM and 20GB storage, it’s specifically engineered for the memory-heavy demands of AI: support for k-NN indexing, vector search, and RAG pipelines.
No credit card required and no trial limits. What else can you ask for?
Two related kernel AI stories this week. First, Linux has shipped an official AI coding assistants policy where AI help is allowed, but every patch needs a human accountable for it. Second, Greg Kroah-Hartman has been running what looks like an AI-assisted fuzzer on the kernel in a branch he calls "clanker."
A bug report filed in 2005 asking for per-screen virtual desktops in KDE has finally been addressed. The feature lets each monitor show a different virtual desktop independently rather than all switching together.
Linux 7.0 landed this week with a wide spread of improvements. Intel gets Nova Lake audio and better Arc GPU temperature reporting. AMD gets early Zen 6 performance profiling support and GPU groundwork for future hardware.
🧠 What We’re Thinking About
Session has lost all its paid developers and is running on volunteers. Donations are keeping the infrastructure alive until July 8, but development is effectively frozen unless they reach their $1 million donation goal.
And if you are absolutely new to Linux, it helps to start with the basics first. Not commands, but the kind of foundational things that make your early terminal experience far less confusing.
Once you get comfortable with the essentials, you might start exploring distributions more deeply. But not all rolling release distros are made equal. Arch gives you everything and expects you to handle it. Manjaro smooths the edges. Void is independent and leans stable. Gentoo compiles everything. Which one would you go for?
And somewhere along that journey, you’ll inevitably hit the classic fork in the road: Vim or nano. Nano works exactly like you'd expect a text editor to work, with controls visible on screen. Vim, on the other hand, runs on modes, muscle memory, and a learning curve that takes real commitment.
📚 Linux eBook bundle (ending this week)
No Starch Press needs no introduction. They have published some of the best books on Linux. And they are running an ebook bundle deal on Humble Bundle.
Plus, part of your purchase supports Electronic Frontier Foundation (EFF).
👷 AI, Homelab and Hardware Corner
At some point every homelab stops being manageable by memory alone. Our roundup of dashboard tools is the answer to that.
Tired of AI fluff and misinformation in your Google feed? Get real, trusted Linux content. Add It’s FOSS as your preferred source and see our reliable Linux and open-source stories highlighted in your Discover feed and search results.
Firefox has a native color picker called Eyedropper that lets you grab the exact hex color code of any color on a webpage. It is available under Menu -> More Tools -> Eyedropper.
You can also right-click on an empty place in the toolbar and select "Customize Toolbar..."
Here, drag and drop the "Developer" tool to the toolbar. Now, you can access the Eyedropper from this button as well.
🗓️ Tech Trivia: On April 16, 1959, John McCarthy gave the first public presentation of LISP at MIT. The list-processing language he built from scratch became the foundation of artificial intelligence programming and introduced concepts like garbage collection still used today.
🧑🤝🧑 From the Community: One of our regular FOSSers has posted about Hardware Freedom Day 2026; are you celebrating?
This quiz is simple. You'll be presented with a few Linux distros and their details. The twist is that they might not be real. They could just be a figment of my imagination.
Of course, this is valid only as of the time I created this quiz. The way things move in the Linux world, new distros could pop up right after I publish it 😃
🚧
Some browsers block the JavaScript-based quiz units. Disable your ad blocker to enjoy the quizzes and puzzles.
The U.S. has been quietly building up a set of state-level laws that push operating system providers into the age verification plague.
California's AB 1043, signed in October 2025, requires OS providers to collect age data at account setup and pipe it to apps through a real-time API. It kicks in on January 1, 2027.
Colorado is working on something nearly identical. SB26-051 (which we covered when it was still a proposal) passed the state Senate 28-7 on March 3, 2026, and is now waiting on a House vote to become law there too.
However, these are just state-level laws. A new federal bill, H.R.8250, introduced on April 13, 2026, by Rep. Josh Gottheimer, with Rep. Elise M. Stefanik signing on as cosponsor, has us intrigued.
The official title of the bill reads, "To require operating system providers to verify the age of any user of an operating system, and for other purposes." But that's a mouthful; the short version is "Parents Decide Act."
If you go by the full title, the bill is pretty self-explanatory: it would require every operating system provider to verify the age of anyone who wants to use their OS and, vaguely enough, serve any "other purposes."
It has been referred to the House Committee on Energy and Commerce and currently sits at step one (Introduced) of five in the legislative process. No bill text has been published; there's no summary, no subject tags, and no related bills attached to it.
That means right now, the only thing formally known about H.R.8250 is its title, its sponsors, and where it got sent.
Gottheimer's office published a press release on April 2, 2026, announcing the bill 11 days before it was formally introduced. That press release was unavailable for a while, but it is now back up.
According to the announcement, the bill would require OS developers to verify user age at device setup, allow parents to set content controls right there, and have those settings flow through to apps and platforms on the device.
Apple and Google were the companies Gottheimer named as the intended targets, with the framing centered entirely around phones and tablets.
But here's where it gets interesting for anyone outside the Apple and Google ecosystem. Gottheimer's press release framed this entirely around commercial mobile platforms. The official bill title, as you saw earlier, does not.
If the bill text matches the breadth of that title, Linux distributions and other open source operating platforms would sit squarely within its scope. And a federal bill passing would mean one nationwide compliance requirement replacing the current state-by-state situation.
The representative also pointed to support from several advocacy groups.
Evidently, things are getting more absurd with each passing day, and I can't wait for the day when access to anything electronic is locked behind a gate, guarded by the most decent and righteous upholders of the law. /s
💬 If you are looking for a conversation surrounding this, our forum is the place to be!
Back in 2005, a bug report was filed by Kjetil Kjernsmo, then running KDE 3.3.2 on Debian Stable. He wanted the ability to have each connected screen show a different virtual desktop independently, rather than having all displays switch as one unit.
Over the years, over 15 duplicate reports piled onto the original as more people ran into the same wall. And that's not a surprise, because multi-monitor setups have become increasingly common.
The technical reason why this issue stayed open this long comes down to X11. Implementing it there would have required violating the EWMH specification, which has no concept of multiple virtual desktops being active at the same time.
The KWin maintainer Martin Flöser had said as much in 2013, effectively ruling it out for the entire KDE 4.x series. The only realistic path was through Wayland, and that path needed someone willing to actually walk it.
Someone finally did. The feature has now landed in KWin's master branch and is set for a Plasma 6.7 introduction.
How was this accomplished?
Video courtesy of Hynek Schlindenbuch.
The merge request was opened by Hynek Schlindenbuch, a developer with no prior KDE contributions.
Each screen now independently tracks which virtual desktop it is showing. Any desktop can appear on any screen, and the same one can be shown on multiple screens at once. Windows belong to a specific screen, even if they visually span two, and can be assigned to one or more virtual desktops.
A window stays visible when its screen is showing one of those desktops. Keyboard shortcuts only switch the desktop on the currently active screen, not across all of them at once.
Unlike in Hyprland, switching to a desktop does not pull focus to that desktop's screen. Hynek made that choice deliberately.
VirtualDesktopManager tracks the current desktop separately for each output, and switching all screens together remains the default, with per-output switching available as an opt-in via settings.
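To make the per-output model concrete, here is a toy sketch of the idea (plain Python, not actual KWin code; names and structure are my own simplification): each output tracks its own current desktop, switching touches only one output, and a window is visible when its screen shows one of its assigned desktops.

```python
class VirtualDesktopManager:
    """Toy model (NOT KWin code) of per-output virtual desktop tracking."""

    def __init__(self, outputs, initial_desktop=1):
        # every output starts on the same desktop
        self.current = {out: initial_desktop for out in outputs}

    def switch(self, output, desktop):
        # only the given output changes; the others keep their desktop
        self.current[output] = desktop


def window_visible(mgr, screen, window_desktops):
    # a window shows when its screen currently displays one of the
    # virtual desktops the window is assigned to
    return mgr.current[screen] in window_desktops


mgr = VirtualDesktopManager(["DP-1", "HDMI-1"])
mgr.switch("DP-1", 3)
print(mgr.current)                          # {'DP-1': 3, 'HDMI-1': 1}
print(window_visible(mgr, "DP-1", {3, 4}))  # True
```

The same dictionary-per-output shape also makes the opt-in behavior easy to picture: switching all screens together is just applying `switch` to every output at once.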
Keep in mind that this fix is Wayland only. X11 was left out intentionally since it relies on the EWMH protocol, and with X11 support being dropped in Plasma 6.8 anyway, that is a less significant shortcoming than it sounds.
If you were curious about Hynek, he is a full-time PHP programmer with over six years of experience. His C++ background going into this project was minimal, and he had no experience with Qt or CMake and had only set up KDE Plasma on an old laptop a few months before opening the merge request.
The motivation for this was his plan to move to Wayland for fractional scaling support, but the missing per-screen desktop functionality was blocking his switch to Plasma.
See how a lone open source developer's initiative changes things for the rest of us? 🙃
Natalie Vock (pixelcluster), a developer who works on low-level Linux code and as an independent contractor for Valve, has published a fix for a VRAM management problem that has been making life difficult for Linux gamers on AMD GPUs with 8GB of VRAM or less.
She has put together a combination of kernel patches and userspace utilities that stop background apps from stealing VRAM away from whatever game you're playing.
The underlying issue is that when VRAM runs out, the kernel driver has no way to tell which memory matters more. A game and a browser tab look identical from the driver's perspective, so when something has to give, game memory often takes the hit.
It then ends up in GTT, a chunk of system RAM that the GPU can access, but over the PCIe bus rather than directly.
The fix is built on the dmem cgroup controller that she co-developed with Maarten Lankhorst from Intel and Maxime Ripard from Red Hat. It is already in the mainline Linux kernel, and it lets the driver treat foreground apps as higher priority when handing out VRAM.
That alone was not enough, though. Natalie has also written six kernel patches to fix a specific gap where VRAM pressure would cause new memory allocations to skip those protections entirely and end up in GTT anyway.
Two userspace utilities handle the rest: dmemcg-booster sets up the groundwork so the kernel protections actually activate, and a fork of KDE Plasma's Foreground Booster keeps track of which app is in the foreground so it gets first dibs on VRAM.
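For a feel of what per-region accounting looks like, here is a toy parser for a dmem-style accounting file. I am assuming a "region value" line layout (e.g. `drm/card0/vram 4294967296`); the region names and exact file format here are my assumption, not taken from the patches, so check your kernel's cgroup documentation for the authoritative layout.

```python
def parse_dmem(text: str) -> dict[str, int]:
    """Parse assumed 'region bytes' lines from a dmem accounting file."""
    usage = {}
    for line in text.strip().splitlines():
        # region names may contain no spaces before the value,
        # so split once from the right
        region, value = line.rsplit(maxsplit=1)
        usage[region] = int(value)
    return usage


sample = "drm/card0/vram 4294967296\ndrm/card0/gtt 1073741824"
print(parse_dmem(sample)["drm/card0/vram"])  # 4294967296
```

The point is simply that the controller accounts VRAM and GTT as separate regions, which is what lets the kernel favor the foreground app's VRAM specifically.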
What this means for Linux gamers
Instead of performance slowly degrading over a session, games should now hold steady for as long as their own VRAM usage stays within budget. Natalie notes that most modern titles tend to stay within 8GB, so owners of 8GB GPUs should be in a much better spot with today's games.
While this applies to any GPU running the amdgpu driver, Intel GPUs on the xe driver have the necessary kernel support too, though real-world testing there is still pending.
Additionally, the developer has submitted a patch for nouveau, the open source NVIDIA driver.
How to get it
🚧
The developer warns that things could break if you install the patches. Proceed with caution, especially on production machines.
The six kernel patches are not in the mainline kernel, so getting them requires some extra steps depending on your setup. CachyOS users on Linux 7.0rc7-2 or later are already covered.
On other Arch-based distros, both utilities are in the AUR. For the kernel side, you can either pull the CachyOS kernel package from the repository or install linux-dmemcg from the AUR, which compiles Natalie's development branch.
The six patch files are also linked directly in the announcement blog for anyone who wants to apply them to a custom kernel build.
For those not on an Arch-based system, the realistic options are applying the patches manually to a self-compiled kernel or waiting for your distro to pick them up. Natalie has said her post will be updated if and when the work gets packaged by other distributions.
The Linux kernel project has spent quite some time navigating the use of AI tools, and the response usually has been somewhere between "figure it out yourself" and "we'll get back to you."
Late last year, at the 2025 Maintainers Summit, Sasha Levin pushed for a documented consensus, and what came out of it was this: human accountability for patches is non-negotiable, purely machine-generated submissions are not welcome, and tool use must be disclosed.
He promised to put something in writing without committing to enforce it, and that work has now shipped with Linux 7.0.
The new document is called AI Coding Assistants and lives in the kernel's process docs alongside the rest of the contribution guidelines. The short version is that AI-assisted contributions still need to comply with GPL-2.0-only; AI agents cannot add Signed-off-by tags; and patches that had AI help should carry an "Assisted-by" tag.
The Developer Certificate of Origin (DCO) remains the linchpin here: it ensures there is a human accountable for every patch, and AI assistance does not change that hard requirement.
Basically, the human submitter reviews everything the AI produced, confirms it meets licensing requirements, and puts their own name on it with an appropriate mention that AI was used.
The Assisted-by tag follows the format Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2], covering cases where one or several tools were used. The document gives Assisted-by: Claude:claude-3-opus coccinelle sparse as an example.
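Put together, a compliant patch footer might look something like this (the subject line, patch description, and author name are invented for illustration; only the tag formats come from the policy):

```
subsystem: fix an example bug

[...patch description...]

Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Developer <jane@example.org>
```

Note the ordering of responsibility: the AI tools are disclosed in the Assisted-by line, but the Signed-off-by belongs to the human who reviewed and vouches for the change.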
Back then, Linus was not even convinced a dedicated tag was necessary and suggested the changelog body would do the job. But now, the kernel community seems to have settled on the tag anyway.
It's already in use
We covered this earlier in the week, but Greg Kroah-Hartman (GKH) seems to have had AI-assisted fuzzing running in his kernel tree for a while now, in a branch called "clanker." He started with the ksmbd and SMB code, found some potential issues, and submitted fixes with a note telling reviewers to verify everything independently before trusting any of it.
That is just about the workflow the new policy was written around: AI surfaces issues, a human with decades of kernel experience decides what is real, writes the fix, and takes responsibility. GKH being the one doing it is no surprise, given he is the stable kernel maintainer and has probably dealt with more bad patches than most.
Other projects have gone in a different direction. Gentoo banned AI-generated contributions entirely in 2024, with its council citing copyright risk, code quality, and ethical concerns.
NetBSD's commit guidelines put LLM-generated code in the "tainted code" category, requiring written approval from the core developers before any of it goes in.
In contrast, Linux is not banning anything. Whether that turns out to be the sensible call or just a lenient one will depend on how seriously people actually take the "a human reviewed this" part.
The development of the Linux kernel moves fast, and the 7.0 release is no exception. Around the same time as this release, a patch queued for Linux 7.1 has kicked off what will eventually be the end of i486 CPU support in the kernel.
But that's a story for another time. For now, let's focus on what Linux 7.0 brings to the table.
The release announcement reads:

"The last week of the release continued the same 'lots of small fixes' trend, but it all really does seem pretty benign, so I've tagged the final 7.0 and pushed it out. I suspect it's a lot of AI tool use that will keep finding corner cases for us for a while, so this may be the 'new normal' at least for a while. Only time will tell."
This coverage is based on the detailed reporting from Phoronix.
Linux Kernel 7.0: What's New?
The release is here, and before getting into the improvements, there is one thing worth getting out of the way first.
This is not a long-term support release. If your priority is stability and extended maintenance, this is not the kernel to land on. Instead, you could opt for Linux kernel 6.18, which is supported until December 2028.
Intel Upgrades
Linux 6.19 already added audio support for Intel Nova Lake S, but the standard Nova Lake (NVL) variant was left out. That's fixed in 7.0, and the difference between the two in terms of specs is mainly in core count (4 vs. 2).
Intel Arc users get something useful too. The Xe driver now exposes a lot more temperature data through the HWMON interface. Previously you got a single GPU core reading; now you get shutdown, critical, and max temperature limits, plus memory controller, PCIe, and individual vRAM channel temperatures.
Panther Lake also gains GSC firmware loading and Protected Xe Path (PXP) support.
And lastly, Diamond Rapids (the upcoming Xeon successor to Granite Rapids) gets NTB driver support, which handles high-speed data transfers between separate systems over PCIe. It is expected to be helpful for distributed storage and cluster setups.
AMD Refinements
While the Zen 6 series of CPUs are still a while out, the kernel is already getting ready for it. Linux 7.0 merges perf events and metrics support for AMD Zen 6, covering performance counters for branch prediction, L1 and L2 cache activity, TLB activity, and uncore events like UMC command activity.
All of that is mainly useful for developers and admins doing performance profiling ahead of launch, and not something the average user will notice.
For virtualization, KVM picks up support for AMD ERAPS (Enhanced Return Address Predictor Security), a Zen 5 security feature. In VM scenarios, this bumps the Return Stack Buffer from 32 to 64 entries, letting guests make full use of the larger RSB.
AMD is also laying the groundwork for next-gen GPU hardware in 7.0, enabling new graphics IP blocks for what looks like an upcoming RDNA 4 successor and another RDNA 3.5 variant.
There are also hints of deeper NPU integration with future Radeon hardware, but AMD hasn't announced anything yet, so exact product details remain a mystery for now.
Better Storage Handling
XFS gets one of the more interesting additions of this release: autonomous self-healing. A new xfs_healer daemon, managed by systemd, watches for metadata failures and I/O errors in real time and triggers repairs automatically while the filesystem stays mounted.
Btrfs picks up direct I/O support for block sizes larger than the kernel page size, falling back to buffered I/O when the data profile has duplication. There's also an experimental remap-tree feature, which introduces a translation layer for logical block addresses that lets the filesystem handle relocations and copy-on-write operations without physically moving or rewriting blocks.
EXT4 sees better write performance for concurrent direct I/O writes to multiple files by deferring the splitting of unwritten extents to I/O completion. It also avoids unnecessary cache invalidation and forced ordered writes when appending with delayed allocation.
Miscellaneous Changes
Wrapping up this section, we have some other notable changes that made it into this release:
WiFi 8 Ultra-High Reliability (UHR) groundwork lands in the networking stack.
Security bug report documentation gets an overhaul to help AI tools send more actionable reports.
Rust support is officially no longer experimental, with the kernel team formally declaring it is here to stay.
ASUS motherboards, including the Pro WS TRX50-SAGE WIFI A and ROG MAXIMUS X HERO, now have working sensor support.
Installing Linux Kernel 7.0
As always, those on rolling distros like Arch Linux, as well as Fedora and its derivatives, will get this new release very soon. If you are on distros like Debian, Linux Mint, Ubuntu, or MX Linux, you will most likely not receive this upgrade through regular updates.
If that doesn't work for you, you could always install the latest mainline Linux kernel on your Ubuntu setup. And, it goes without saying, this is risky. If you end up borking your system, we are not to blame for it.
Linux Mint is known for being simple and beginner friendly. It works out of the box with most essential features ready to use, so you don’t have to spend time setting things up. One such basic task is taking screenshots, and Mint makes it very easy even if you are completely new to Linux.
In this beginner's guide, we will look at the built-in screenshot tool in Linux Mint and the keyboard shortcuts you can use right away.
📋
This article is part of the Linux Mint beginner's tutorial series.
The GUI screenshot tool that you don't want to miss
Linux Mint provides a simple graphical interface for those who prefer a GUI solution for taking screenshots.
Beyond the basic options, the tool also includes a few useful features. Let’s take a look at them next.
First, open the Screenshot tool by searching for it in the start menu.
Open Screenshot Tool
💡
You can pin the Screenshot app to the taskbar for quick access.
The interface is simple and easy to understand. There are three main options:
Capture Screen: Takes a screenshot of the entire screen.
Capture Window: Captures the active window.
Capture Selection: Lets you select a specific area to capture by left-clicking and dragging.
Screenshot Tool Interface
After choosing the method, click the Take Screenshot button at the top left of the window.
Show mouse cursor in screenshot
In the Screenshot tool, you will find an option called Show Pointer. Enable this if you want the mouse pointer to be visible in your screenshots.
Show Pointer
Take screenshot with a delay
You can also set a small delay before taking a screenshot.
🚧
This does not apply to keyboard shortcuts by default.
In the Screenshot tool, enter a value in seconds under the Delay in Seconds option.
Add a Delay to Screenshot
Once set, the tool will wait for the specified time before capturing the screenshot when using the GUI. For example, if you set it to 5 seconds, the screenshot will be taken after a 5-second delay.
💡
One common use case for delay is capturing the mouse cursor in window or area screenshots. Without a delay, the screenshot is taken instantly, so you do not get time to move the cursor from the Screenshot tool to the target application or position it properly.
Using keyboard shortcuts
If you prefer not to open a GUI app every time you take a screenshot, that is not a problem. Linux Mint provides keyboard shortcuts that let you quickly capture the screen in different ways.
Take a screenshot of the entire screen
You can press the PrtScr key on your keyboard to capture the entire screen.
After taking the screenshot, you will be prompted to either save it with a name or copy it to the clipboard. This works well for basic use.
However, this can feel limited if you only want to capture a small part of the screen. The good news is that Linux Mint also provides an easy way to do that.
Take a screenshot of an area
To take a screenshot of a specific area, use the Shift + PrtScr shortcut.
Your screen will dim slightly and the cursor will change to a plus sign. Click, hold, and drag to select the area you want to capture.
Once you release the mouse button, you can choose to copy the screenshot or save it.
🚧
Keep in mind that you cannot adjust the selection after releasing the mouse button, so select the area carefully.
Take a screenshot of a window
Sometimes, you may want to capture only the currently active window. While you can do this using the area selection method, using a shortcut is much more convenient.
Press Alt + PrtScr to take a screenshot of the active window.
There are a few things to keep in mind. If a menu is open inside the window, like a top menu or a right-click context menu, this shortcut may not work.
🚧
In my case, none of the screenshot shortcuts worked while the focused window had a menu open. In such cases, you need to set a delay before taking the screenshot, which we will see in a later section.
Also, if a dialog box is open, the tool will capture whichever window is active at that moment, whether it is the main window or the dialog.
Record the screen
Many people do not realize that Linux Mint also includes a built-in screen recorder. It is not visible in the menus, so it is easy to miss.
Press Shift + Ctrl + Alt + R to start recording your screen. Press the same combination again to stop the recording.
This is a basic tool, so do not expect features like those in dedicated applications such as OBS Studio or SimpleScreenRecorder. It simply records your entire screen.
When you stop the recording, the video file is saved in the Videos folder inside your Home directory.
Custom Shortcuts
In the previous section, we saw that the GUI tool offers options like delay and showing the mouse pointer, which are not available with the default keyboard shortcuts.
However, this does not mean you are limited. In Linux Mint, you can create custom shortcuts to include these actions as well.
The screenshot options
Before setting up custom screenshot shortcuts, it helps to understand the available options. Linux Mint uses the GNOME Screenshot tool for both the GUI and keyboard-based screenshots.
GNOME Screenshot provides several useful options, along with many more that you can explore in its man page.
gnome-screenshot -w: Takes a screenshot of the currently active window.
gnome-screenshot -a: Lets you click and drag to select a region to capture.
gnome-screenshot -d 5: Adds a 5-second delay before capturing the entire screen.
gnome-screenshot -d 5 -p: Applies a 5-second delay and includes the pointer in the screenshot.
gnome-screenshot -d 5 -a and gnome-screenshot -d 5 -w: Capture a selected area or the active window, respectively, with a 5-second delay.
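Before wiring these flags into shortcuts, it can help to test them from a terminal first. Here is a minimal sketch that builds a delayed area capture with the pointer included, and only actually invokes gnome-screenshot if the tool is installed and a display is available:

```shell
#!/bin/sh
# Build a delayed area capture that also includes the mouse pointer,
# combining the -a, -d, and -p flags described above.
DELAY=5
CMD="gnome-screenshot -a -d $DELAY -p"
echo "$CMD"

# Only run the capture if gnome-screenshot exists and a display is up.
if [ -n "$DISPLAY" ] && command -v gnome-screenshot >/dev/null 2>&1; then
    $CMD
fi
```

Swap -a for -w if you want to target the active window instead of a selection.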
Setting custom screenshot shortcuts
Search for and open Keyboard from the start menu.
Open Keyboard Application
Go to the Shortcuts tab and then select Custom Shortcuts. Click on the Add custom shortcut button.
Add Custom Shortcut
Now, enter a name for the shortcut. For example, you can use "Take screenshot of an area with a delay" in the Name field.
Enter name and command
In the command field, enter the required command. For example, use gnome-screenshot -d 5 -a, and then click the Add button.
The command will now be listed. To assign a shortcut, select it under Keyboard shortcuts and click on the Unassigned option in the Keyboard bindings section.
Add the keybinding to the custom command.
You will be prompted to press a key combination. Press the shortcut you want to use.
You can repeat the same steps to create and assign shortcuts for other commands based on your needs.
Other screenshot tools
Sometimes, basic screenshots are not enough. You may want to annotate an image or add borders and other adjustments.
These are image editing features, and they are not available in the default Screenshot tool in Linux Mint.
For such needs, you can use third party screenshot tools that offer more control and customization.
We have a separate article that covers screenshot tools you can use in more detail. You can refer to it to find options that suit different needs and use cases.
As a quick note, Flameshot and Ksnip are two good screenshot tools you can use for editing and customization. Gradia is another option that provides basic editing.
Did you find it useful? Feel free to share your thoughts in the comments.
If you care about privacy and don't take too well to governments and Big Tech companies snooping on your messages, then Session has probably come up at some point. It's a free, open source, end-to-end encrypted messaging app that doesn't ask for your phone number or email to sign up.
Messages are routed through an onion network rather than a central server, and the combination of no-metadata messaging, anonymous sign-up, and decentralized architecture has earned it a loyal following among privacy-conscious users.
Unfortunately, the project has sent out a mayday call as it risks closure.
A call for help
Your donations have helped, and the Session Technology Foundation (STF) has received enough funding to support critical operations for 90 days.
This means that Session will remain available on the app stores and essential services (such as the file server and push notification…
The Session Technology Foundation (STF) sent out what can only be described as a distress signal, announcing that the app's survival is in serious peril. The day the announcement was published was also the last working day for all paid staff and developers at the STF.
From that point on, Session is being kept running entirely by volunteers.
The donations that they received earlier are enough to keep critical infrastructure online until July 8, but not nearly enough to retain a development team. With nobody left on payroll, development has been paused.
Due to that, introducing new features is off the table, existing bugs will most likely go unaddressed, and the STF says new releases are unlikely during this period.
Session co-founder Chris McCabe had already flagged the trouble coming. In a personal appeal published earlier in March, he wrote that the organizations safeguarding Session had faced many challenges over the years and that the project's very survival was now at risk.
He concluded with an appeal:
The project is on a path to self-sustainability, but the future is fragile. If every Session user contributed just one dollar, it would go a long way towards Session reaching sustainability. If you've ever considered donating, now is the time to act.
The appeal didn't change the outcome, so the Session folks had to sound the alarm. The foundation says it needs $1 million to complete the work still in progress.
That includes Protocol v2, which adds forward secrecy (PFS), post-quantum cryptography, and improved device management, as well as Session Pro, a subscription tier intended to put the project on a self-sustaining footing.
If that goal is hit, the STF says it hopes Session could stand on its own without needing to go back to the community for more.
As of writing, $65,000 of that $1 million has been raised. Anyone who wants to see this privacy-focused messaging app survive, especially at a time when surveillance is only getting worse, can donate at getsession.org/donate.
France's national digital directorate, DINUM, has announced (in French) it is moving its workstations from Windows to Linux. The announcement came out of an interministerial seminar held on April 8, organised jointly by the Directorate General for Enterprise (DGE), the National Agency for Information Systems Security (ANSSI), and the State Procurement Directorate (DAE).
The Linux switch is not the only move on the table. France's national health insurance body, CNAM, is migrating 80,000 of its agents to a set of homegrown tools: Tchap for messaging, Visio for video calls (more on this later), and France transfert for file transfers.
The country's national health data platform is also set to move to a sovereign solution by the end of 2026.
Beyond the immediate moves, the seminar laid out a broader plan. DINUM will coordinate an interministerial effort built around forming coalitions between ministries, public operators, and private-sector players, with interoperability standards at the core (the Open Interop and Open Buro initiatives are specifically named).
Every French ministry, including public operators, will be required to submit its own non-European software reduction plan by Autumn 2026.
The plan is expected to cover things like workstations, collaboration tools, antivirus, AI, databases, virtualization, and network equipment. A first set of "Industrial Digital Meetings" is planned for June 2026, where public-private coalitions are expected to be formalized.
Speaking on this initiative, Anne Le Hénanff, Minister Delegate for Artificial Intelligence and Digital Affairs, added that (translated from French):
Digital sovereignty is not optional — it is a strategic necessity. Europe must equip itself with the means to match its ambitions, and France is leading by example by accelerating the shift to sovereign, interoperable, and sustainable solutions.
By reducing our dependence on non-European solutions, the State sends a clear message: that of a public authority taking back control of its technological choices in service of its digital sovereignty.
You might remember, a few months earlier, France set out on a similar path for video conferencing. The country mandated that every government department switch to Visio, its homegrown, MIT-licensed alternative to Teams and Zoom, by 2027.
Part of the broader La Suite Numérique initiative, it had already been tested with 40,000 users across departments before the mandate was announced. So this move looks like an even more promising one, and we shall keep an eye on how this pans out.
With the rise of AI and humanoid robots, the word "Clanker" is being used to describe such solutions, and rightly so. In their current state, these are quite primitive, and while they can act like something resembling human intelligence, they still can't match what nature cooked up.
Now that terminology has made its way into the Linux kernel thanks to Greg Kroah-Hartman (GKH), the Linux stable kernel maintainer and the closest thing the project has to a second-in-command.
He has been quietly running what looks like an AI-assisted fuzzing tool on the kernel that lives in a branch called "clanker" on his working kernel tree. Before you ask, fuzzing is a method of automated software testing that bombards code with unexpected, malformed, or random inputs to trigger crashes, memory errors, and other misbehavior.
It is a critical line of defense for a massive codebase like Linux.
How it started
It began with the ksmbd and SMB code. GKH filed a three-patch series after running his new tooling against it, describing the motivation quite simply. He picked that code because it was easy to set up and test locally with virtual machines.
What the fuzzer flagged were potential problems specific to scenarios involving an "untrusted" client. The three fixes that came out of it addressed an EaNameLength validation gap in smb2_get_ea(), a missing bounds check that required three sub-authorities before reading sub_auth[2], and a mechToken memory leak that occurred when SPNEGO decode fails after token allocation.
GKH was very direct about the nature of the patches, telling reviewers: "please don't trust them at all and verify that I'm not just making this all up before accepting them."
It does not stop there. The clanker branch has since accumulated patches across a wide range of subsystems, including USB, HID, WiFi, LoongArch, networking, and more.
Who is GKH?
If you are not well versed in the kernel world, GKH is one of the most influential people in Linux development.
He has been maintaining the stable kernel branch for quite a while now, which means every long-term support kernel that powers servers, smartphones, embedded devices, and pretty much everything else running Linux passes through his hands.
He also wrote Linux Kernel in a Nutshell back in 2006, which is freely available under a Creative Commons license. It remains one of the more approachable references for anyone trying to understand kernel configuration and building, and it is long overdue for a new edition (hint hint).
He also mentioned running an internal AI experiment where the tool reviewed a merge he had objected to. The AI not only agreed with his objections but found additional issues to fix.
Linus called that a good sign, while asserting that he is "much less interested in AI for writing code" and more interested in AI as a tool for maintenance, patch checking, and code review.
AI should assist, not replace
There is an important distinction worth making here. What GKH appears to be doing is not having AI write kernel code. The fuzzer surfaces potential bugs; a human with decades of kernel experience reviews them, writes the actual fixes, and takes responsibility for what gets submitted.
If that's the case, then this is the sensible approach, and it mirrors what other open source projects have been formalizing. LLVM, for instance, adopted a "human in the loop" AI policy earlier this year, requiring contributors to review and understand everything they submit, regardless of how it was created.
Microsoft has had a complicated relationship with the open source world. VSCode, TypeScript, and .NET are all projects it created, and its acquisition of GitHub put it in charge of the world's largest code hosting platform.
But it is also the same company that bakes telemetry into Windows by default and has been aggressively pushing Copilot AI into every corner of its software. That last part especially has been nudging a growing number of people toward open alternatives.
And now, a wave of developer account suspensions has given some open source developers a new headache.
What's happening?
Microsoft rolled out mandatory account verification for all partners enrolled in the Windows Hardware Program who had not completed verification since April 2024. The requirement kicked in on October 16, 2025, giving partners 30 days from notification to verify their identity with a government-issued ID.
Plus, that ID has to match the name of the Partner Center primary contact. Miss the deadline or fail verification, and your account gets suspended with no further submissions allowed.
This matters because signing Windows kernel drivers requires one of these accounts. Without it, developers cannot ship signed driver updates, and Windows will flag unsigned drivers, blocking them from loading at the kernel level.
Three major open source projects found this out the hard way. VeraCrypt, WireGuard, and Windscribe all had their developer accounts suspended, cutting off their ability to ship updates on Windows.
It appears @Microsoft is actively suspending developer accounts with no warning or reason of various security tools like VeraCrypt, WireGuard and also Windscribe. We've had this VERIFIED account for 8+ years to sign our drivers.
VeraCrypt developer Mounir Idrassi was the first to go public. In a SourceForge forum post, he wrote that Microsoft had terminated his account with no prior warning, no explanation, and no option to appeal.
Repeated attempts to reach Microsoft through official channels got him nothing but automated replies. The suspension hit his day job too, not just VeraCrypt.
WireGuard creator Jason Donenfeld hit the same wall a couple of weeks later, when he went to certify a new WireGuard kernel driver for Windows and found his account showing as access restricted. He eventually tracked down a Microsoft appeals process, but it carried a 60-day response window.
Windscribe's situation was arguably the messiest. The company says it had held a verified Partner Center account for over eight years and spent more than a month trying to sort things out before going public.
Moreover, once an account is suspended, Partner Center blocks users from opening a support ticket directly.
What now?
This eventually got Microsoft's attention: Scott Hanselman, VP and Member of Technical Staff at Microsoft and GitHub, stepped in on X to say the accounts would be fixed. He pointed to the October 2025 blog post (linked earlier) and said the company had been sending emails to affected partners since then.
Scott confirmed he had personally reached out to both Mounir and Jason to get their accounts unblocked, and that fixes were already in progress.
Anyway, this doesn't look good: leaving developers of critical security software without recourse for weeks only erodes trust. But, in the end, this won't really affect a behemoth like Microsoft, which has a dominant hold on the operating system market.
Linus Torvalds created two of the most widely used tools in modern computing: the Linux kernel and Git.
Git, of course, is a version control system primarily used by programmers.
But Theena makes a strong case that Git and plain text are the best tools a writer can use, not just for backup but for building a writing practice that is truly their own.
At its core, the argument is about breaking free from platform dependency, preserving work for the long term, and treating your body of work as something worth designing around rather than just storing somewhere convenient.
Here are other highlights of this edition of FOSS Weekly:
sudo tips and tweaks.
APT's new version has useful features.
Opera GX arriving as a gaming browser for Linux.
A Linux driver proposal to catch malicious USB devices.
And other Linux news, tips, and, of course, memes!
Tired of AI fluff and misinformation in your Google feed? Get real, trusted Linux content. Add It’s FOSS as your preferred source and see our reliable Linux and open-source stories highlighted in your Discover feed and search results.
Not open source software but Opera GX, the gaming-focused Chromium browser that's been on Windows and macOS for years, has finally landed on Linux. Sourav took the early access build for a spin and tested the features it's known for, like GX Control for capping RAM and CPU usage while gaming and GX Cleaner for cleaning up junk data.
The Linux kernel is finally dropping i486 support, queued for Linux 7.1. The first patch removes the relevant Kconfig build options, with a fuller cleanup covering 80 files and over 14,000 lines of legacy code still to follow.
Proton has launched two new things: Proton Workspace, a bundled suite of all their services aimed at businesses looking for a privacy-first alternative to Google Workspace or Microsoft 365, and Proton Meet, an end-to-end encrypted video conferencing tool using the open source MLS protocol.
A proposal has been submitted to the Linux kernel mailing list for a new HID driver called hid-omg-detect that passively monitors USB keyboard-like devices for suspicious behavior.
Another proposal, this time for Fedora, was recently struck down. It looked to move per-user environment variable management from shell RC files into systemd.
YOUR support keeps us going, keeps us resisting the established media and big tech, keeps us independent. And it costs less than a McDonald's Happy Meal a month.
Support us via Plus membership and additionally, you:
✅ Get 5 FREE eBooks on Linux, Docker and Bash ✅ Enjoy an ad-free reading experience ✅ Flaunt badges in the comment section and forum ✅ Help creation of educational Linux materials for everyone
No Starch Press needs no introduction. They have published some of the best books on Linux. And they are running an ebook bundle deal on Humble Bundle.
You can copy a file in Nautilus by pressing Ctrl+C, then press Ctrl+M to paste it as a symbolic link instead of an actual copy. This is a handy way to create a symlink without ever needing to open a terminal!
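If you do find yourself in a terminal anyway, the equivalent is a single ln -s call. A self-contained sketch using a throwaway directory:

```shell
#!/bin/sh
# Terminal equivalent of the Ctrl+C / Ctrl+M trick in Nautilus:
# create a symbolic link that points at the original file.
tmpdir=$(mktemp -d)
echo "hello" > "$tmpdir/original.txt"

# -s makes a symbolic (soft) link rather than a hard link.
ln -s "$tmpdir/original.txt" "$tmpdir/link.txt"

# Reading the link follows it through to the original contents.
linked=$(cat "$tmpdir/link.txt")
echo "$linked"

rm -r "$tmpdir"
```

The link is just a pointer: deleting it leaves the original untouched, while deleting the original leaves the link dangling.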
🎋 Fun in the FOSSverse
In this members-only crossword, you will have to name systemd's ctl commands.
An appropriate meme on the OS-level age verification topic.
🗓️ Tech Trivia: On April 8, 1991, a small team at Sun Microsystems quietly relocated to work in secret on a project codenamed "Oak", a programming language that would eventually be renamed Java and go on to become one of the most widely used languages in the world, powering everything from Android apps to enterprise software.
APT, or Advanced Package Tool, is the package manager on Debian and its derivatives like Ubuntu, Linux Mint, and elementary OS. On these, if you want to install something, remove it, or update the whole system, you do it via APT.
It has been around for decades, and if you are on a Debian-based distro, then you have almost certainly used it without giving it much thought. That said, it has seen active development in the last couple of years.
We covered the APT 3.0 release this time last year, which kicked off the 3.x series with a colorful new output format, the Solver3 dependency resolver, a switch from GnuTLS/GnuPG to OpenSSL, and Sequoia for cryptographic operations.
The 3.1.x cycle that followed has now closed out with APT 3.2 as the stable release, and it brings some notable changes with it.
What do you get with APT 3.2?
The biggest additions with this release are transaction history with rollback support, some new commands, and per-repository package filtering.
APT now keeps a log of every package install, upgrade, and removal. You can view the full list with apt history-list, which shows all past operations with an ID assigned to each. To see exactly what packages were affected in a specific operation, you can use apt history-info <ID>.
From there, apt history-undo <ID> can be used to reverse a specific operation, reinstalling removed packages or removing installed ones as needed. If you undo something mistakenly and want it back, run apt history-redo <ID> to reapply it.
For cases where you want to revert everything back to the state at a particular point, apt history-rollback <ID> does that by undoing all operations that happened after the specified ID. Use this with care, as it makes a permanent change.
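If you wanted to script this, you could grab the most recent transaction ID from apt history-list and feed it to history-undo. The sketch below parses sample output; the column layout shown is an assumption, so adjust the parsing to whatever your APT build actually prints:

```shell
#!/bin/sh
# Sketch: extract the highest (most recent) transaction ID so it can
# be passed to `apt history-undo`. The sample text below stands in for
# real `apt history-list` output; the actual columns may differ.
sample="ID  Command              Date
1   apt install vim      2026-01-10
2   apt install nala     2026-01-11
3   apt upgrade          2026-01-12"

# Skip the header row, take the first column, keep the largest ID.
latest_id=$(printf '%s\n' "$sample" | tail -n +2 | awk '{print $1}' | sort -n | tail -n 1)
echo "$latest_id"

# On a real system, the follow-up would be:
#   sudo apt history-undo "$latest_id"
```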
apt why and apt why-not are another set of new additions that let you trace the dependency chain behind a package. Run apt why <package> and APT will tell you exactly what pulled it onto your system. Run apt why-not <package> and it will tell you why it is not installed.
Similarly, Include and Exclude are two new options that let you limit which packages APT uses from a specific repository. Include restricts a repo to only the packages you specify, and Exclude removes specific packages from a repo entirely.
Solver3, which shipped as opt-in with APT 3.0, is now on by default. It also gains the ability to upgrade packages by source package, so all binaries from the same source are upgraded together.
Additionally, your system will no longer go to sleep while dpkg is mid-install, and JSONL performance counter logging is also in, though that is mostly useful for developers.
If all of that's got you interested, you can try APT 3.2 on a Debian Sid installation, as I did below, or wait for the Ubuntu 26.04 LTS release, which is reportedly shipping it.
How to use rollback in APT?
I almost got lost in the labyrinth of Vim, unable to exit.
After installing some new programs using APT, I tested a few commands to see how rollback and redoing transactions worked. First, I ran sudo apt history-list in the terminal and entered my password to authorize the command.
The output was a list of APT transactions that included the preparatory work I had done to switch to Debian Sid from Stable, as well as the two install commands to get Vim and Nala installed.
Next, I ran sudo apt history-info 4, the number being the ID of the transaction, and I was shown all the key details related to it, such as the start/end time, requested by which user, the command used, and packages changed.
After that, I ran sudo apt history-undo 4 to revert the Vim installation and sudo apt history-redo 4 to restore the installation; both of these commands worked as advertised.
Finally, I tested sudo apt history-rollback 3 to get rid of Nala, and the process was just about the same as before, with me being asked to confirm changes by typing "Y".
When I tried to run apt history-redo for this one, the execution failed as expected.
💬 Do these new additions look useful to you? Can't be bothered? Let me know below!
Anthropic has handed the Apache Software Foundation (ASF) a $1.5 million donation. The money is earmarked for build and security infrastructure, project services, and community support.
If you have used the internet today, you have almost certainly touched something the ASF maintains. Projects like Kafka, Spark, Cassandra, and the Apache HTTP Server are not niche tools but a critical part of modern IT infrastructure.
The ASF does not sell anything. It runs on donations, and without sustained funding, the infrastructure behind all of that software does not maintain itself.
Anthropic's framing for the donation is essentially that AI runs on this stuff and someone has to fund it. As AI development moves forward more quickly, the open source foundations underneath it need to be in good shape to keep up.
On the topic, Ruth Suehle, President of the Apache Software Foundation, added that:
Open source software is the foundation of modern digital life — largely in ways the average person is completely unaware of — and ASF projects are a critical part of that. When it works, nobody notices, and that’s exactly the goal.
But that kind of reliability isn’t a given. It is the result of sustained investment in neutral, community-governed infrastructure by each part of the ecosystem. Support like Anthropic’s helps ensure long-term strength, independence, and security of the systems that keep the world running.
Similarly, Vitaly Gudanets, Chief Information Security Officer at Anthropic, said that:
AI is accelerating rapidly, but it’s built on decades of open source infrastructure that must remain stable, secure, and independent. Supporting the Apache Software Foundation is a direct investment in the resilience and integrity of the systems that modern AI — and the broader software ecosystem — depend on.
Some thoughts
You might remember Anthropic was part of a similar donation campaign back in March, when the Linux Foundation announced $12.5 million in grants to strengthen open source software security. Anthropic was one of seven contributors to that pool, alongside AWS, Google, Google DeepMind, GitHub, Microsoft, and OpenAI.
That funding was managed by Alpha-Omega and the Open Source Security Foundation (OpenSSF), with the goal of helping open source maintainers deal with the growing flood of AI-generated vulnerability reports they simply do not have the bandwidth to handle.
It is great to see open source receiving monetary support, but the smaller players who are equally important in the ecosystem also need to be supported better. Big donations like this tend to flow toward well-established foundations, while the countless smaller projects that hold up just as much critical infrastructure quietly struggle for resources.
Linux Mint devs announced that they will adopt a longer development cycle starting with the upcoming Linux Mint 23 release, along with other important changes to the distribution.
The Mir 2.26 compositor is now available for download with an initial implementation of the Wayland frontend in Rust, as well as many other Wayland improvements.