
Thunderbolt Wants to Do for AI Clients What Thunderbird Did for Email

April 19, 2026 at 14:08

MZLA Technologies Corporation, the Mozilla Foundation subsidiary behind Thunderbird, has announced Thunderbolt, an open source, self-hostable AI client for organizations that want to run AI on their own infrastructure.

The project is funded through investment from Mozilla and is a standalone product, separate from Thunderbird, built by a different team within MZLA that's focused on enterprise AI products.

Released under the Mozilla Public License 2.0, Thunderbolt provides an AI workspace where users can interact with AI through chat, search, and research, connect to enterprise data, and choose the models and tools that fit their needs.

It runs natively on Linux, Windows, macOS, iOS, and Android, with a web client also available.

A thing to note…

You should know that Thunderbolt ships with telemetry on by default.

According to the project's telemetry documentation on GitHub, it uses PostHog to collect usage data covering chat activity, model selections, settings changes, and location information.

This can be switched off in settings, and the project states no personally identifiable information (PII) is collected without explicit consent.
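
For context, PostHog-based usage analytics generally boils down to an event name plus a small property bag tied to a pseudonymous ID. Here's a rough sketch of the mechanism; the event and property names below are hypothetical illustrations, not Thunderbolt's actual telemetry schema:

```python
# Hypothetical illustration of PostHog-style telemetry; the event and
# property names are made up, not taken from Thunderbolt's schema.
from posthog import Posthog

posthog = Posthog(
    "phc_placeholder_project_key",    # placeholder project API key
    host="https://eu.i.posthog.com",  # placeholder ingestion host
)

# Record that a user picked a model, tied to a pseudonymous ID.
posthog.capture(
    distinct_id="anonymous-user-123",
    event="model_selected",
    properties={"model": "ollama/llama3"},
)
```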

Who is it for?

The intended audience for this could be organizations with strict data residency or compliance requirements. So think healthcare providers, legal firms, and financial institutions that cannot afford sensitive internal data flowing through third-party AI services.

As for its competition, Thunderbolt is a direct challenge to Microsoft Copilot, ChatGPT Enterprise, and Claude Enterprise. In the open source space, it sits alongside tools like Open WebUI and LibreChat, both of which offer self-hosted AI frontends.

Announcing Thunderbolt, Ryan Sipes, CEO of MZLA Technologies Corporation, said:

AI is too important to outsource. With Thunderbolt, we’re giving organizations a sovereign AI client that allows them to decide how AI fits into their workflows – on their infrastructure, with their data, and on their terms.

What can you expect?

Screenshots of Thunderbolt running on a laptop and a smartphone.

Thunderbolt connects to frontier models from Anthropic, OpenAI, and Mistral, handles local inference through Ollama, and accepts custom providers, with the workspace offering Chat and Search modes.
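
The Ollama side of that equation is a plain HTTP API on localhost. As a rough sketch of what local inference looks like (the model name and prompt are placeholders, and this says nothing about how Thunderbolt wires it up internally):

```python
# Minimal request against Ollama's local HTTP API (default port 11434).
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3",   # any model you have pulled locally
        "prompt": "Summarize this week's meeting notes in three bullets.",
        "stream": False,     # return a single JSON object, not chunks
    }).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```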

It can also handle scheduled work, pulling together briefings, tracking topics over time, or kicking off actions when set conditions are met.

deepset's Haystack integration ties the client into enterprise agent and RAG pipelines within the same architecture, while MCP (Model Context Protocol) support is in preview and ACP (Agent Client Protocol) is in active development with an April 2026 target.

How to get it?

You can get started with Thunderbolt by visiting thunderbolt.io. Organizations interested in enterprise deployment, professional support, or custom development can get in touch with the team.

As for the source code, it lives on GitHub.

Other than that, the FAQ does mention that a Thunderbolt version for regular users is on the cards, but there's no release date for it yet.


Mozilla’s New Firefox Mascot ‘Kit’ Triggers Online Backlash Over Pronouns

April 19, 2026 at 11:07

It started with one thing. I don’t know why… but somehow, it turned into a debate no one expected.

I could not help taking a walk in Linkin Park ;)

Okay. Back to serious stuff. Weird but serious stuff.

So, last month, Mozilla unveiled the new Firefox mascot, named Kit. That's a cute-looking mascot, by the way.

Kit, Mozilla Firefox's new mascot

Mozilla shared a post in their official subreddit. A couple of weeks later, someone noticed the use of pronouns in that post and all hell broke loose. What was supposed to bring "warmth and familiarity" instead brought heated arguments and boycott threats.

Mozilla Firefox's new mascot announcement

As you can see above, instead of "it", they/them was used in that post, potentially indicating a non-binary gender identity for the mascot. More on this later in the article.

Hi everyone, if you’ve been poking around our recent updates, you might have noticed a new mascot showing up a little more intentionally. We figured it’s time to introduce them properly. Meet Kit. And before you ask, Kit is neither fox nor red panda, they’re a firefox of course.

The discussion gained traction around April 11, when Brian Lunduke highlighted the pronoun usage, bringing the topic into wider debate.

Meet “Kit”. The Firefox Web browser’s new, non-binary mascot.

Yes. You read that right.

Non-binary. With “They/Them” pronouns.

Because, of course @Mozilla had to inject “pronouns”, and “gender identity” politics into a web browser. pic.twitter.com/wPnpCcDfy0

— The Lunduke Journal (@LundukeJournal) April 10, 2026

How the Internet Reacted (Predictably)

As you can imagine, the internet reacted really well, being "very accepting and non-controversial". If you don't believe me, you're right. We're too far past April Fools' for me to get away with that joke.

The “Woke” Debate and Boycott Calls

The outrage was very evident on X (Twitter) and Reddit. Some people immediately jumped ship, posting screenshots of themselves uninstalling Firefox.

Firefox uninstall over Kit
Firefox uninstall over Kit (2)

Some posted about how it is all a part of the "rainbow" agenda, and how they're trying to infuse politics into something that has no need for it.

Claiming Kit to be "rainbow" agenda

Some people started posting about using the "male lion" browser, Brave.

Bigotry on display, claiming Brave as a "male" browser

The word "woke" was used in almost every other tweet, claiming that it was being spread like a virus.

A smaller faction of critics brought up a few points about Firefox's past. Co-founder Brendan Eich had made a significant contribution to a California campaign that sought to ban same-sex marriage (he resigned after the resulting outrage and went on to found Brave).

Brendan Eich controversy

Other than that, Mozilla's AI integration and the CEO's statements in favor of censorship in 2020 were also brought up.

Mozilla CEO's censorship controversy

Counter-Reactions: “Why Does This Even Matter?”

The other side consisted more of indifference than support, the most frequent point being that it doesn't matter what pronouns a cartoon fox mascot uses, and that the outrage was misplaced and irrelevant to everyday use of the browser.

Indifference to the gender identity of Kit

The word "snowflake" was thrown around from both the sides, but more to emphasize how a part of the internet was offended by something very harmless.

Snowflakes being thrown from both sides about Kit's identity

While some users spoke of the inclusion of a large faction of people, and gave brownie points for spreading awareness, other users claimed it alienated "half of the population".

Users claiming Kit's gender is alienating users

On a completely different note, a part of the internet went all in on the jokes. My personal favorite, as a physics student: of course Firefox is non-binary, considering its codename "Quantum", where a particle exists in two states at the same time.

"Quantum" joke about Kit's identity

It goes without saying that there were plenty of "what does the fox say?" jokes.

Memes about Kit's gender claim

The Reality: Is Kit Even Non-Binary?

The important point to remember through all of this, however, is that Mozilla did not really claim that Kit was non-binary, but only referred to Kit with "they" later in the article.

The article never claimed Kit was non-binary

You don't have to rely on a tweet for the reference. If you read the branding guidelines from Mozilla, they clearly mention this about Kit:

Kit (he/she/they/them/it) is the user’s constant companion. Wherever they choose to roam, Kit will accompany and guide them with clever, playful encouragement and support — giving the user the confidence to run free.

Basically, Kit has no gender. Or, should I say, it has whatever gender you prefer. Perhaps the person who posted from Mozilla's official account preferred the 'they/them' pronouns? Personally, I would prefer calling it "it" because it rhymes with "Kit".

Final Thoughts: Much Ado About Nothing?

So what’s the takeaway?

A mascot meant to feel friendly ended up triggering a familiar internet cycle: interpretation, outrage, and counter-outrage.

Whether you see it as inclusion, overreach, or simply irrelevant, one thing is clear: even a cartoon fox isn’t safe from becoming a debate.

What do you think? Overreaction or valid concern?


Won’t Somebody Think of the Children? Why Big Tech’s ‘Tobacco Moment’ Isn’t What It Seems

April 19, 2026 at 09:27

In Los Angeles this March, a jury did something US courts have long refused to do: it treated the feed itself as the harm. It felt like vindication, victory even, to those of us who are critical of big tech's outsized influence on every aspect of our lives. But what is called for is cautious optimism, caution even, instead of celebration.

Jurors found Meta and Google negligent for the way Instagram and YouTube are designed, not for any particular piece of content the 20-year-old plaintiff (identified as Kaley/KGM) happened to see on them. They awarded her $6 million in compensatory and punitive damages and explicitly described these platforms as deliberately addictive “machines” that harmed her mental health.

This is more than a sympathetic jury and a moving story. It is the first time a US jury has effectively treated major social platforms as defective consumer products whose design – infinite scroll, notifications, algorithmic recommendations – can be a “substantial factor” in harming young users. In doing so, the case skirted the traditional shield of Section 230 by focusing not on user‑generated content, but on product design and failure to warn.

For critics of big tech, and I am one of them, that sounds like justice delayed finally arriving. I was happy.

Briefly.

But if we are not careful, the legal and policy response to this big tobacco moment will harden the rapidly enshittified internet we already have: centralized, identity‑hungry, and surveillance‑driven. These are precisely the conditions that made these products so powerful in the first place.

From Bad Content to Bad Machines

For nearly three decades, legal debates about platforms have orbited around content: who is responsible for extremist propaganda, self‑harm photos, misinformation. Section 230 in the US enshrined the idea that platforms are not publishers of third‑party speech. Even when courts and regulators pushed, they pushed on content moderation, not on the underlying machine.

The Kaley verdict is a reorientation of this conversation. Jurors heard company documents and expert testimony describing Instagram and YouTube as “addiction machines” designed to maximize engagement, time‑on‑site, and data extraction from children who were never supposed to be there in the first place.

They found negligence not only in failing to keep under‑13s off the platforms, but in failing to warn about the risks of the core design itself.

This shift from “we hosted bad content” to “we built a dangerous machine” matters. It opens the door to product‑liability style reasoning that could travel, in principle, to other design patterns: streaks, loot boxes, recommendation systems, dark patterns in onboarding. It also resonates with developments outside the US, where the EU’s Digital Services Act is already scrutinizing addictive design at the level of interface and recommender algorithms. Earlier this year, the European Commission issued preliminary findings that TikTok’s reliance on infinite scroll and weak “screen time breaks” breaches its duty to mitigate addictive design risks under the DSA, and told the company to change “the basic design of its service”.

But if the machine is on trial, the question becomes: what kind of machine do we build next?

“Addiction” as Legal Story and Medical Dispute

In both law and media, the Kaley verdict has been framed as proof that social media is simply addictive and toxic to teens. The courtroom narrative is clean: a straight line drawn from a vulnerable child to the manipulative machine.

The scientific picture is messier.

On one side, the 2026 World Happiness Report carries a chapter by Jonathan Haidt and Zachary Rausch arguing that there is now “overwhelming evidence” that social media is harming adolescents at a scale large enough to shift population‑level mental health, drawing on seven lines of evidence ranging from cross‑sectional studies to natural experiments. The authors argue that ordinary use – often five or more hours a day – functions as a product safety failure, especially for girls.

We further argue that when these lines of evidence are considered alongside the timing, scope, and cross-national trends in adolescent well-being and mental health, they can help answer a second question: was the rapid adoption of always-available social media by adolescents in the early 2010s a substantial contributor to the population-level increases in mental illness that emerged by the mid 2010s in many Western nations? We call this the “historical trends question”. We draw on our findings about the vast scale of harm uncovered while answering the product safety question to argue that the answer to the historical trends question is “yes”.

On the other, another chapter in the same report, by Helliwell and colleagues, emphasizes that the relationship between youth well-being and internet use is more nuanced: some types of online activity (communication, learning, content creation) correlate with higher life satisfaction, while heavy social media and gaming correlate with lower well-being, particularly at extreme usage levels and in English‑speaking countries. They caution that youth well-being trends cannot be reduced to a single cause.

In other words: there is strong evidence of risk and harm, but causality, dose, and mechanism are still contested.

Safety as a Pretext for More Surveillance

Politicians around the world have not waited for the science to settle. They have moved quickly to do something about youth and social media – and the measures they are choosing tell us a lot about the political economy of the internet they are entrenching.

In Australia, world‑first social media age restrictions now require major platforms – Facebook, Instagram, TikTok, X, YouTube, Snapchat, Threads, Reddit, Kick, Twitch – to take “reasonable steps” to prevent under‑16s from having accounts, backed by fines of up to A$49.5 million for non‑compliance.

In practice, they are expected to deploy multiple age assurance technologies: ID checks, facial or voice analysis, behavioral age inference.

Children and parents themselves are not fined; the pressure is entirely on platforms to ramp up identity and behavioral surveillance in order to demonstrate diligence.

In the US, California’s Digital Age Assurance Act pushes the same logic down into the operating system itself. From January 2027, OS vendors are required to collect an age or age bracket at account setup and expose it via an API so that app stores and online services can query a system‑level age signal.

The law is written broadly enough that free and open‑source operating systems – Debian, Fedora, BSDs, Pop!_OS – are, on paper, on the hook alongside Apple and Microsoft.

System76’s CEO, writing about this wave of laws in Colorado and California, warns that the effect is to turn OS vendors into identity brokers and gatekeepers, and to “encourage children to lie” about their age for fear of being confined to a “nerfed internet”.

Layer these developments on top of each other and a pattern emerges: won't somebody please think of the children?

We've heard this moral argument before: with video games, heavy metal, rap. What happens next is history rhyming:

  • pushing age‑verification and age‑bracketing ever deeper into the stack – from app sign-up forms, to OS APIs, to network‑level checks;
  • incentivising large platforms and OS vendors to collect, infer, and share more information about who we are and how old we are;
  • creating compliance burdens that small, de-centralized, or non‑profit projects can barely navigate, effectively nudging regulators and industry towards a small club of compliant, centralized providers.

Safety becomes the moral language through which a more identity‑locked, surveilled, and centralized internet is made to feel inevitable.

Regulators Discover “Addictive Design” – But For Whom?

The EU’s preliminary findings on TikTok’s addictive design under the DSA are a good example of this ambivalence. On one level, it is encouraging to see regulators finally target infinite scroll, frictionless autoplay, and weak screen time nudges as systemic risks requiring product changes, not simply more content moderation. The Commission is, at least in principle, saying: design patterns that exploit compulsive behavior and harm children can be unlawful. This is a good start. Unfortunately that's where the good news ends.

Notice who is legible to this kind of regulation. The DSA presumes large, centralized platforms with access to vast behavioral data, capable of implementing complex risk‑assessment and age‑assurance regimes. The Australian and Californian laws do the same.

A federated social network run by a school, a youth center, or a community collective cannot cheaply plug into this machinery. A small FOSS OS project has neither the lawyers nor the telemetry to play at this table.

The risk is that addiction design becomes another compliance rubric that only the biggest players can afford to satisfy, while everyone else is either chilled out of existence or forced to rely on the same proprietary identity infrastructure.

The Missing Imagination: Community‑Run, Free and Open Alternatives

The saddest thing about this moment is how narrow the mainstream imagination of alternatives remains. The policy menu is filled with bans, curfews, and ID checks for the same extractive platforms. There is little serious talk of changing the infrastructure.

Yet we know from both history and present practice that other models are possible. Schools and libraries have run moderated online communities for decades. Federated platforms like Mastodon and Matrix, for all their flaws, show that it is possible to have social networks that are not controlled by a single profit‑maximizing entity. Community‑run game servers, forums, and fan communities have long been youth‑driven spaces with their own norms of care and accountability. My first years on the internet, circa 2001-2003, were spent in such forums. Social media trampled such online communities during its first decade.

A genuinely emancipatory response to the Kaley verdict would start from a different question: given that these products have now been recognized, in court, as dangerous by design, how do we:

  • treat them like other dangerous consumer products – with warnings, design constraints, and liability – without making bio-metric and behavioral surveillance the price of entry to the digital world;
  • redirect public money, regulation, and cultural attention towards building non‑exploitative, commons‑based digital spaces for young people;
  • lower the barriers for schools, municipalities, youth groups, and co‑ops to run their own FOSS‑based platforms, with public funding and legal safe harbors, rather than locking them into corporate clouds that must, by their nature, maximize engagement.

This is where free and open source software is not just a licensing detail but a political stance. An internet where young people’s social lives unfold on community‑run, auditable, forkable software – hosted by institutions that have a duty of care, not a duty to shareholders – is not a utopian fantasy. It is not merely a design choice.

It is a political choice.

Builders, Regulators, and the Rest of Us

For those who build technology, the Kaley verdict is a warning shot: engagement is no longer a neutral metric. If a design pattern is optimized to keep a 10‑year‑old scrolling past bedtime, courts may increasingly treat that as a defect, not an achievement. Engineers, designers, and product managers now have to think like people who might one day be cross‑examined about why they shipped this infinite scroll, this notification scheme, this recommender.

For regulators, the temptation will be to double down on what already feels familiar: more age gates, more identity checks, more compliance dashboards for big platforms and OS vendors. It is politically safer to demand better seat-belts from the existing car companies than to fund buses, bike lanes, or public trains. But if all we do is wrap the same addictive machines in ever tighter rings of surveillance and control, we will have saved some children from some harms at the price of deepening structural dependence on the very firms whose incentives created the crisis.

The LA jury has told us, in the blunt language of damages and negligence, that the machine is the problem. The real task now is to ensure that the fix is not simply a more paternalistic, more identity‑hungry version of the same machine, but an opening for something else: community‑run, free and open infrastructures where young people can be online without being harvested.

That is a harder story to tell in a courtroom. But it is the story the rest of us – parents, educators, coders, writers, legislators – will have to write.



21-year-old Polish Woman Fixed a 20-year-old Linux Bug!

April 17, 2026 at 15:22

Okay, not a bug in the Linux kernel, but one that has existed in the Enlightenment window manager E16 since 2006, when Kamila Szewczyk was barely a year old.

Kamila, now a 21-year-old graduate student at Saarland University in Germany, daily drives a window manager that predates most of her classmates. That alone is a fun fact.

But what makes it remarkable is that she didn't just use it; she dug into its decades-old codebase, found a bug that had been hiding there since 2006, and fixed it.

What is Enlightenment E16, again?

Kamila's Enlightenment E16 desktop

For the uninitiated, Enlightenment is a window manager for Linux, the software responsible for drawing and managing the windows on your screen. It first appeared in 1997, making it older than a significant portion of today's Linux user base. E16, the version Kamila uses, arrived in 1999 and quickly gained a reputation for being highly customizable and visually impressive, at a time when most Linux desktops were far more utilitarian.

Enlightenment is not as well known as KDE or GNOME, and even LXDE has broader name recognition today. But it has a small, dedicated following and can be found in niche distributions like Pentoo or Bodhi Linux. Bodhi actually uses Moksha, a fork of Enlightenment, as its default desktop.

Over time, the Enlightenment team began a complete rewrite of the project using a new modular framework called EFL (Enlightenment Foundation Libraries). That rewrite took over a decade and eventually became E17, released in December 2012. E17 evolved from a simple window manager into a full desktop shell with modern compositing and improved hardware support.

But not everyone followed. A portion of the community stuck with E16, continuing to maintain and develop it independently. It reached the 1.0 milestone and, as of 2024, the latest release is version 1.0.30. It is very much alive, just quietly so.

Kamila is part of that quiet community.

The accidental bug discovery

She wasn't hunting for bugs. She was doing something mundane: preparing lecture slides for a course she teaches as a graduate student. She had a couple of PDFs typeset in LaTeX, opened one of them in Atril, a document viewer, and her entire desktop froze.

It wasn't a one-off glitch. The freeze was reproducible, which is both frustrating and, for a developer, oddly exciting. A reproducible bug is a bug you can actually chase down. So she did.

After digging through the codebase, Kamila traced the freeze back to the way E16 handled overly long file names.

When a window title was too long and needed to be truncated, the algorithm responsible for doing so had no iteration limit. So it would spin indefinitely, locking up the desktop entirely. The bug had been sitting there, dormant, since 2006, waiting probably for exactly the right set of circumstances to surface.
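
To make the failure mode concrete, here is a minimal sketch of this class of bug in Python. E16 itself is written in C, so this is an illustration of the pattern, not Kamila's actual patch:

```python
MAX_WIDTH = 12  # pretend titles longer than this must be shortened

def truncate_buggy(title: str) -> str:
    # Cut the title, then glue an ellipsis back on. The re-appended
    # tail pushes the length back over MAX_WIDTH, so the condition
    # never becomes false: the loop spins forever on long titles.
    while len(title) > MAX_WIDTH:
        title = title[:MAX_WIDTH] + "..."  # length is MAX_WIDTH + 3 again
    return title

def truncate_fixed(title: str, max_iter: int = 64) -> str:
    # Guarantee progress on every pass and cap the iterations as a
    # safety net, so a pathological title can never hang the loop.
    iterations = 0
    while len(title) > MAX_WIDTH and iterations < max_iter:
        title = title[: len(title) - 4] + "..."  # net: one char shorter
        iterations += 1
    return title

print(truncate_fixed("a-very-long-window-title-from-a-latex-pdf.pdf"))
# -> "a-very-lo..."
```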

She patched it and the fix is available on her blog. I hope she made a pull request to the original codebase as well.

Why this story matters

On the surface, this is a niche story about an obscure window manager that most Linux users have never touched. But look a little closer and it is something more than that.

Kamila was born in 2004. The bug she fixed was already two years old by then. She grew up, went to university, became a graduate student and a teacher and the bug just sat there, in a codebase maintained by a handful of enthusiasts, waiting. It took someone who actually uses E16 as a daily driver to finally stumble onto it and care enough to fix it.

That is the true open source spirit. Not a big company, not a bounty program, not a CVE filing. Just a person, their computer, a frozen desktop, and the curiosity to figure out why.

There are people who have been maintaining this codebase for decades. There are people who still use it. And every now and then, one of those users catches something no one else did and quietly makes the software a little better before moving on with their day.

That's not a small thing. That's the whole point.

Source: The Register


Cal.com Goes Closed Source Because "AI Can Easily Exploit Open Source Software"

April 17, 2026 at 13:33

AI has been a mixed bag for the open source world. Some developers are using it to write faster, catch bugs, and review patches more efficiently. Others are watching the same tools get turned against the codebases they maintain.

Cal.com, a popular open source scheduling platform and one of the more well-known self-hostable alternatives to Calendly, has found itself in the second camp. After five years as an open source project, the company has announced that it is switching to a closed-source model, citing the growing threat of AI-powered vulnerability scanning.

What happened?

The co-founder of Cal.com, Bailey Pumfleet, has addressed why they went down this path, saying that AI has changed what it takes to exploit an application. Earlier, finding vulnerabilities meant real expertise and some serious time investment.

But today, an AI model can be directed towards a public repo and do the same job systematically without needing much manual labor.

He also cited a specific case to back this up, where AI tooling reportedly found a 27-year-old vulnerability in the BSD kernel and had working exploits ready within hours.

📋
I think Bailey has misattributed the above occurrence, as the 27-year-old bug was found in OpenBSD, thanks to Claude Mythos, and has since been patched.

But, yeah, closed source it is. 😅

Another thing worth knowing is that the production codebase had already been drifting away from what was publicly available. Core systems like authentication and data handling had both gone through significant rewrites, making the public repo and what actually runs in production two fairly different things by the time this announcement came.

Does it make sense?

Cal.com isn't wrong that AI can be used to hunt for vulnerabilities in open source code. That's documented and real. But the argument treats AI purely as an attacker's tool, which is a selective reading of the situation.

Take the Linux kernel, for example. We recently covered how Greg Kroah-Hartman, the Linux stable kernel maintainer, has been running what looks like AI-assisted fuzzing on the kernel through a branch he calls "clanker," using it to identify bugs and patch them proactively.

There's even an official policy in place that governs the use of such AI tools for contributions.

Then there's the older argument that closing your source doesn't actually make you more secure. It just means fewer eyes on the code. Open source projects benefit from anyone, anywhere, being able to spot and report problems.

Heartbleed and Log4Shell were both found by external researchers precisely because the code was auditable. A private codebase doesn't prevent vulnerabilities; it just reduces the chances of catching them before someone with bad intentions does.

What's next?

For self-hosters and developers, Cal.diy is what's on offer. It's available now under the MIT license, with the documentation covering installation via Docker, Vercel, Railway, Render, and a handful of other platforms.

The project is described as "strictly recommended for personal, non-production use," with a "use at your own risk" disclaimer throughout. It is community-maintained, with no official backing from Cal.com.

Feature-wise, Cal.diy covers the personal scheduling essentials like event types, calendar integrations, video conferencing, webhooks, and API access.

But a fair bit is missing. Teams, Organizations, SAML SSO, SCIM directory sync, Workflows, Routing Forms, and the Insights Dashboard are all absent from the community edition.

If you're running Cal.com for anything commercial, the Cal.diy documentation steers you back to the paid product pretty explicitly, saying that "for any commercial and enterprise-ready scheduling infrastructure, use Cal.com."

All of that made me wonder whether AI was the catalyst or the perfect scapegoat for a closed-source transition. Anyway, I like yapping like this every so often; don't mind me.


Russian Baikal CPUs Are Losing Their Place in the Linux Kernel

April 17, 2026 at 09:21

Support for Russian Baikal CPUs is being pulled from the Linux kernel. Work has begun in the Linux 7.1 cycle to remove driver code and device tree bindings for Baikal SoC hardware, with more patches already lined up to follow.

The first removal came with the ATA pull for Linux 7.1-rc1, merged by Linus Torvalds on April 15. It dropped the Baikal bt1-ahci DT binding and stripped Baikal-specific code from the ahci_dwc driver, with the ATA maintainer, Niklas Cassel, noting that upstreaming for the SoC "is not going to be finalized."

You can browse the LKML to track Baikal's removal.

Furthermore, the code had been sitting unmaintained for some time. Serge Semin, who contributed the bulk of Baikal's kernel support over the years, was among roughly a dozen Russian developers removed from the kernel MAINTAINERS file in 2024.

With no one left to maintain it and the hardware itself rare even within Russia, there appears to be no rationale for keeping the code around.

Some background info

The Baikal line of CPUs is the work of Baikal Electronics, which was founded in January 2012 as a spinoff of T-Platforms, a Russian supercomputer company.

It started with a MIPS-based chip for embedded applications, then pivoted to ARM for its later processors, all manufactured at TSMC. The plan was to supply Russian state-owned enterprises with domestically produced CPUs as an alternative to Intel and AMD.

But Russia's 2022 invasion of Ukraine ended that. Sanctions cut off TSMC access, 150,000 Baikal-M units already manufactured were seized in Taiwan, and ARM production licenses were lost. The company filed for bankruptcy in August 2023.

It did not stay down. By the end of 2024, Baikal had shipped a total of 85,000 processors since its founding and began serial production of the Baikal-U1000, a RISC-V microcontroller, in September 2025 (in Russian).

The current lineup consists of the Baikal-T (MIPS), Baikal-M and Baikal-S (ARM), and the Baikal-U (RISC-V).

Those already running Linux on Baikal hardware will need to stay on Linux 6.18 LTS or earlier, as newer kernel versions drop the support.


Suggested Read 📖: The Linux Kernel is Finally Letting Go of i486 CPU Support


Privacy Email Service Tuta Now Also Has Cloud Storage with Quantum-Resistant Encryption

April 16, 2026 at 20:03

Privacy in 2026 is a bit of a joke. Governments have turned surveillance into standard operating procedure, and Big Tech companies treat your personal data like a free-for-all buffet, helping themselves, then selling the leftovers to data brokers who do the same.

That's pushed people toward privacy-first alternatives, and quite a few companies have stepped up to meet that demand. Tuta is one of the more recognizable names in that space, offering encrypted mail and calendar services to over 10 million users worldwide.

Now, the company is looking to round out its ecosystem with the one piece that's been missing: an encrypted cloud storage solution.

A haven for your files?

Tuta first laid the groundwork for this back in July 2023, when it announced the PQDrive project with backing from the German government. The initiative had received €1.5 million in funding through the KMU-innovativ program, a grant scheme that supports small and medium enterprises in research and development.

The goal was clear from the very beginning. It was to build a cloud storage service secured with post-quantum encryption, not just conventional algorithms.

To get there, Tuta partnered with the University of Wuppertal, which handled key research tasks including testing cryptographic algorithms and figuring out how to deduplicate encrypted data without punching holes in the security model.

All that effort has now produced a product ready for real-world testing. Starting today, Tuta Drive enters closed beta, with select users receiving early access to put it through its paces ahead of a public release.

It is an end-to-end encrypted cloud storage service that fits directly into Tuta's existing ecosystem alongside mail and calendar. Everything you store gets encrypted without any action needed on your end, and the zero-knowledge architecture means Tuta has no technical ability to read your files or share them with anyone else.

The encryption underpinning Drive is the same TutaCrypt protocol Tuta already uses for its mail service. It combines classical and quantum-resistant algorithms in a hybrid approach, so even if a quantum computer cracks one layer down the line, it still has to contend with the other.

And, the service is hosted in Germany, which brings strict GDPR protections into play on top of the technical safeguards.

Arne Möhle, CEO of Tuta, announced the launch by commenting that:

With Tuta Drive, we are taking the next step towards offering a full private digital workspace.

Today, more than ten million citizens and businesses, including journalists, whistleblowers and activists use Tuta Mail as an alternative to insecure email offered by mainstream providers.

Adding an encrypted cloud storage to Tuta will enable them to also store their files securely.

Test run

We were given early access to the closed beta ahead of its rollout today, and here's a look at what Tuta Drive is like right now.

The interface is minimal, which is fine. You get a familiar sidebar and a top bar that shows you the server connection status and houses quick-switch buttons for Mail, Contacts, Calendar, and Drive.

Uploading new files on Tuta Drive.

First, I uploaded two videos to see how Tuta Drive would handle them. Here, the upload speeds were noticeably slow when connected over a VPN, though that's more or less expected. Without an active VPN connection, file uploads were fast.

Tuta Drive makes it easy to move any uploaded files.

Moving those files to a new folder afterward was straightforward using the "Move" option from the right-click context menu. Drag and drop works too, and I could manually select specific files without any issues. Cut and paste for moving files around also worked well.

When uploading multiple files at once, a progress list appears, which is handy. The one catch is that you can't scroll through it to check which file is currently being processed, which was a bummer.

The upload progress list in the Tuta Drive closed beta.

Files are shown with appropriate icons depending on type, so images, videos, and audio all get their own visual treatment. Folders display a cat emoji where the folder size info should probably appear, which looks like a work-in-progress placeholder more than anything else.

Different file types shown with their own icons in the Tuta Drive closed beta.

If you upload something by mistake or decide a file isn't worth keeping, you can delete it quickly, either from the right-click context menu or by hitting Delete on your keyboard. The "Trash" page then gives you the choice to either restore it if it was a wrong call or permanently delete it if you're sure.

Deleting files from Tuta Drive.

That said, folder uploads aren't supported yet, and the keyboard shortcut support is lacking. Ctrl+A to select everything in a folder, for instance, does nothing. No search tool either; those are the kinds of gaps that user feedback tends to sort out quickly.

Seeing that this is a closed beta, I am confident that the Tuta folks will listen to what people say about their newest offering and act accordingly.


💬 Would you give Tuta Drive a shot, or are you too committed to Proton Drive or other cloud solutions to even look its way?


FOSS Weekly #26.16: Kernel 7.0, Essential Terminal Tips, France Linux Move, New Age Verification Bill and More

April 16, 2026 at 16:00

The big news, and it’s good, is coming from France. The government’s digital agency DINUM is moving its workstations from Windows to Linux, with every French ministry required to submit a plan by Autumn 2026 to reduce dependence on non-European software.

Another major update, and not a pleasant one, is coming from the United States. A federal bill is now being discussed that proposes OS-level age verification. Until now, this was limited to a handful of states, but the bill could expand it nationwide.

Two very different directions. Both worth paying attention to.

Here are other highlights of this edition of FOSS Weekly:

  • A new Linux kernel release.
  • France replacing Windows with Linux.
  • Microsoft locking out open source developers.
  • And other Linux news, tips, and, of course, memes!
  • This edition of FOSS Weekly is supported by Aiven.

Aiven just launched a permanent free tier for OpenSearch, offering a fully managed, persistent playground for your projects. With 4GB RAM and 20GB storage, it’s specifically engineered for the memory-heavy demands of AI: support for k-NN indexing, vector search, and RAG pipelines.

No credit card required and no trial limits. What else can you ask for?

Deploy your cluster today

📰 Linux and Open Source News

VeraCrypt, WireGuard, and Windscribe all had their Windows Hardware Program developer accounts suspended, cutting off their ability to ship signed driver updates for Windows.

Two related kernel AI stories this week. First, Linux has shipped an official AI coding assistants policy where AI help is allowed, but every patch needs a human accountable for it. Second, Greg Kroah-Hartman has been running what looks like an AI-assisted fuzzer on the kernel in a branch he calls "clanker."

A Valve contractor has put together a fix for the VRAM mismanagement problem that's been hitting Linux gamers on AMD GPUs with 8GB or less.

A bug report filed in 2005 asking for per-screen virtual desktops in KDE has finally been addressed. The feature lets each monitor show a different virtual desktop independently rather than all switching together.

Linux 7.0 landed this week with a wide spread of improvements. Intel gets Nova Lake audio and better Arc GPU temperature reporting. AMD gets early Zen 6 performance profiling support and GPU groundwork for future hardware.

🧠 What We’re Thinking About

Session has lost all its paid developers and is running on volunteers. Donations are keeping the infrastructure alive until July 8, but development is effectively frozen unless they reach their $1 million donation goal.

🧮 Linux Tips, Tutorials, and Learnings

Not everyone is a command line fan, but if you do spend some time in the terminal, these tips and shortcuts will save you plenty of time and make you more efficient.

And if you are absolutely new to Linux, it helps to start with the basics first. Not commands, but the kind of foundational things that make your early terminal experience far less confusing.

Moving from basics to everyday usability, we now have a beginner-friendly guide to taking screenshots in Linux Mint. It covers the built-in GUI tool, keyboard shortcuts, and even how to set up custom delayed screenshots.

Once you get comfortable with the essentials, you might start exploring distributions more deeply. But not all rolling release distros are made equal. Arch gives you everything and expects you to handle it. Manjaro smooths the edges. Void is independent and leans stable. Gentoo compiles everything. Which one would you go for?

And somewhere along that journey, you’ll inevitably hit the classic fork in the road: Vim or nano. Nano works exactly like you'd expect a text editor to work, with controls visible on screen. Vim, on the other hand, runs on modes, muscle memory, and a learning curve that takes real commitment.

📚 Linux eBook bundle (ending this week)

No Starch Press needs no introduction. They have published some of the best books on Linux. And they are running an ebook bundle deal on Humble Bundle.

I highly recommend checking it out and getting the bundle.

Plus, part of your purchase supports Electronic Frontier Foundation (EFF).

👷 AI, Homelab and Hardware Corner

At some point every homelab stops being manageable by memory alone. Our roundup of dashboard tools is the answer to that.

Tired of AI fluff and misinformation in your Google feed? Get real, trusted Linux content. Add It’s FOSS as your preferred source and see our reliable Linux and open-source stories highlighted in your Discover feed and search results.

Add It's FOSS as preferred source on Google (if you use it)

✨ Apps and Projects Highlights

Yantr is a self-hosted app store for your homelab that runs as a single Docker container on top of whatever OS you're already using.

📽️ Videos for You

Fedora 44 got delayed, but you can check out what's new!

💡 Quick Handy Tip

Firefox has a native color picker called Eyedropper that helps you know the exact hex color code of a specific color on a webpage. It is available inside Menu -> More Tools -> Eyedropper.

firefox eyedropper tool

You can also right-click on an empty place in the toolbar and select "Customize Toolbar..."

Here, drag and drop the "Developer" tool to the toolbar. Now, you can access the Eyedropper from this button as well.

🎋 Fun in the FOSSverse

A new fun quiz where you have to spot the fake distros among the real ones.

Oops, let me hide my pile of trash. 🫠

messy home directory linux meme

🗓️ Tech Trivia: On April 16, 1959, John McCarthy gave the first public presentation of LISP at MIT. The list-processing language he built from scratch became the foundation of artificial intelligence programming and introduced concepts like garbage collection still used today.

🧑‍🤝‍🧑 From the Community: One of our regular FOSSers has posted about Hardware Freedom Day 2026; are you celebrating?

Can You Identify The Fake Linux Distros From The Real Ones?

April 16, 2026 at 15:26

Not all distros are created equal.

In fact, not all distros are created at all.

This quiz is simple. You'll be presented with a few Linux distros and their details. The twist is that they might not be a real thing. They could just be a figment of my imagination.

Of course, this is valid only as of the time I created this quiz. The way the Linux world moves, new distros could pop up right after I publish it 😃

🚧
Some browsers block the JavaScript-based quiz units. Disable your ad blocker to enjoy the quizzes and puzzles.

Oh No! Now A Federal Bill Wants OS-Level Age Verification for Everyone in the USA

April 16, 2026 at 14:11

The U.S. has been quietly building up a set of state-level laws that push operating system providers into the age verification plague.

California's AB 1043, signed in October 2025, requires OS providers to collect age data at account setup and pipe it to apps through a real-time API. It kicks in on January 1, 2027.

Colorado is working on something nearly identical. SB26-051 (which we covered when it was still a proposal) passed the state Senate 28-7 on March 3, 2026, and is now waiting on a House vote to become law there too.

However, these are just state-level laws. A new federal bill, H.R.8250, introduced on April 13, 2026, by Rep. Josh Gottheimer, with Rep. Elise M. Stefanik signing on as cosponsor, has us intrigued.

The proposed H.R.8250 bill on congress.gov.

The official title of the bill reads, "To require operating system providers to verify the age of any user of an operating system, and for other purposes." But that's a mouthful; the short version is "Parents Decide Act."

If you go by the full title, the bill is pretty self-explanatory: it would require every operating system provider to verify the age of anyone using their OS and, vaguely enough, serve any "other purposes."

It has been referred to the House Committee on Energy and Commerce and currently sits at step one (Introduced) of five in the legislative process. No bill text has been published; there's no summary, no subject tags, and no related bills attached to it.

That means right now, the only thing formally known about H.R.8250 is its title, its sponsors, and where it got sent.

But wait, do you… 👇

Want more details?

Gottheimer's press release announcing the bipartisan "Parents Decide Act".

Gottheimer's office published a press release on April 2, 2026, announcing the bill 11 days before it was formally introduced. That press release was unavailable for a while, but it is now back up.

According to the announcement, the bill would require OS developers to verify user age at device setup, allow parents to set content controls right there, and have those settings flow through to apps and platforms on the device.

Apple and Google were the companies Gottheimer named as the intended targets, with the framing centered entirely around phones and tablets.

But here's where it gets interesting for anyone outside the Apple and Google ecosystem. Gottheimer's press release framed this entirely around commercial mobile platforms. The official bill title, as you saw earlier, does not.

If the bill text matches the breadth of that title, Linux distributions and other open source operating platforms would sit squarely within its scope. And a federal bill passing would mean one nationwide compliance requirement replacing the current state-by-state situation.

The representative's announcement also listed support from several groups.

Evidently, things are getting more absurd with each passing day, and I can't wait for the day when access to anything electronic is locked behind a gate, guarded by the most decent and righteous upholders of the law. /s


💬 If you are looking for a conversation surrounding this, our forum is the place to be!

A PHP Dev Just Solved a 20+ Year-Old KDE Plasma Problem No One Else Would

April 15, 2026 at 11:18

Back in 2005, a bug report was filed by Kjetil Kjernsmo, then running KDE 3.3.2 on Debian Stable. He wanted the ability to have each connected screen show a different virtual desktop independently, rather than having all displays switch as one unit.

Over the years, more than 15 duplicate reports piled onto the original as more people ran into the same wall. And that's no surprise, because multi-monitor setups have become increasingly common.

The technical reason why this issue stayed open this long comes down to X11. Implementing it there would have required violating the EWMH specification, which has no concept of multiple virtual desktops being active at the same time.

The KWin maintainer Martin Flöser had said as much in 2013, effectively ruling it out for the entire KDE 4.x series. The only realistic path was through Wayland, and that path needed someone willing to actually walk it.

Someone finally did. The feature has now landed in KWin's master branch and is set for a Plasma 6.7 introduction.

How was this accomplished?

Video courtesy of Hynek Schlindenbuch.

The merge request was opened by Hynek Schlindenbuch, a developer with no prior KDE contributions.

Each screen now independently tracks which virtual desktop it is showing. Any desktop can appear on any screen, and the same one can be shown on multiple screens at once. Windows belong to a specific screen, even if they visually span two, and can be assigned to one or more virtual desktops.

A window stays visible when its screen is showing one of those desktops. Keyboard shortcuts only switch the desktop on the currently active screen, not across all of them at once.

Unlike Hyprland, switching to a desktop does not pull focus to that desktop's screen. Hynek made that choice deliberately.

VirtualDesktopManager tracks the current desktop separately for each output, and switching all screens together remains the default, with per-output switching available as an opt-in via settings.

Keep in mind that this fix is Wayland only. X11 was left out intentionally since it relies on the EWMH protocol, and with X11 support being dropped in Plasma 6.8 anyway, that is a less significant shortcoming than it sounds.

If you were curious about Hynek, he is a full-time PHP programmer with over six years of experience. His C++ background going into this project was minimal, and he had no experience with Qt or CMake and had only set up KDE Plasma on an old laptop a few months before opening the merge request.

The motivation for this was his plan to move to Wayland for fractional scaling support, but the missing per-screen desktop functionality was blocking his switch to Plasma.

See how a lone open source developer's initiative changes things for the rest of us? 🙃

An Open Source Dev Has Put Together a Fix for AMD GPU's VRAM Mismanagement on Linux

April 13, 2026 at 16:24

Natalie Vock (pixelcluster), a developer who works on low-level Linux code and as an independent contractor for Valve, has published a fix for a VRAM management problem that has been making life difficult for Linux gamers on AMD GPUs with 8GB of VRAM or less.

She has put together a combination of kernel patches and userspace utilities that stop background apps from stealing VRAM away from whatever game you're playing.

The underlying issue is that when VRAM runs out, the kernel driver has no way to tell which memory matters more. A game and a browser tab look identical from the driver's perspective, so when something has to give, game memory often takes the hit.

It then ends up in GTT, a chunk of system RAM that the GPU can access, but over the PCIe bus rather than directly.

The fix is built on the dmem cgroup controller that she co-developed with Maarten Lankhorst from Intel and Maxime Ripard from Red Hat. It is already in the mainline Linux kernel, and it lets the driver treat foreground apps as higher priority when handing out VRAM.

That alone was not enough, though. Natalie has also written six kernel patches to fix a specific gap where VRAM pressure would cause new memory allocations to skip those protections entirely and end up in GTT anyway.

Two userspace utilities handle the rest: dmemcg-booster sets up the groundwork so the kernel protections actually activate, and a fork of KDE Plasma's Foreground Booster keeps track of which app is in the foreground so it gets first dibs on VRAM.
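
If you want a feel for the mechanism, the dmem controller is managed through cgroup v2 files, much like the memory controller. Below is a rough sketch of the idea; the cgroup path, region name, and PID are illustrative assumptions, and the real dmemcg-booster does considerably more than this:

```python
# Rough sketch: cap a background app's VRAM via the dmem cgroup
# controller so the foreground game keeps priority. All paths and the
# region key below are illustrative assumptions, not a recipe.
from pathlib import Path

root = Path("/sys/fs/cgroup")
group = root / "background-apps"    # hypothetical cgroup for the browser etc.
region = "drm/0000:03:00.0/vram0"   # hypothetical GPU memory region name

# Enable the dmem controller for child cgroups, then create the group.
(root / "cgroup.subtree_control").write_text("+dmem")
group.mkdir(exist_ok=True)

# Limit everything in this cgroup to 1 GiB of VRAM on that region...
(group / "dmem.max").write_text(f"{region} {1 << 30}")
# ...and move a background process (placeholder PID) into the cgroup.
(group / "cgroup.procs").write_text("4242")
```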

What this means for Linux gamers

Instead of performance slowly degrading over a session, games should now hold steady for as long as their own VRAM usage stays within budget. Natalie notes that most modern titles tend to stay within 8GB, so owners of 8GB GPUs should be in a much better spot with today's games.

While this applies to any GPU running the amdgpu driver, Intel GPUs on the xe driver have the necessary kernel support too, though real-world testing there is still pending.

Additionally, the developer has submitted a patch for nouveau, the open source NVIDIA driver.

How to get it

🚧
The developer warns that things could break if you install the patches. Proceed with caution, especially on production machines.

The six kernel patches are not in the mainline kernel, so getting them requires some extra steps depending on your setup. CachyOS users on Linux 7.0rc7-2 or later are already covered.

On other Arch-based distros, both utilities are in the AUR. For the kernel side, you can either pull the CachyOS kernel package from the repository or install linux-dmemcg from the AUR, which compiles Natalie's development branch.

The six patch files are also linked directly in the announcement blog for anyone who wants to apply them to a custom kernel build.

For those not on an Arch-based system, the realistic options are applying the patches manually to a self-compiled kernel or waiting for your distro to pick them up. Natalie has said her post will be updated if and when the work gets packaged by other distributions.


Suggested Read 📖: The Linux 7.0 Release is Here!

AI Code Gets Approved in the Linux Kernel… But With Strings Attached

April 13, 2026 at 13:39

The Linux kernel project has spent quite some time navigating the use of AI tools, and the response has usually been somewhere between "figure it out yourself" and "we'll get back to you."

Late last year, at the 2025 Maintainers Summit, Sasha Levin pushed for some documented consensus, and what came out of it was human accountability for patches being non-negotiable, purely machine-generated submissions not being welcome, and tool use being disclosed.

He promised to put something in writing without committing to enforce it, and that work has now shipped with Linux 7.0.

What is it?

Linux's AI coding assistants policy.

The new document is called AI Coding Assistants and lives in the kernel's process docs alongside the rest of the contribution guidelines. The short version is that AI-assisted contributions still need to comply with GPL-2.0-only; AI agents cannot add Signed-off-by tags; and patches that had AI help should carry an "Assisted-by" tag.

The Developer Certificate of Origin (DCO) exists precisely so that there is a human accountable for every patch. AI assistance does not change that hard requirement.

Basically, the human submitter reviews everything the AI produced, confirms it meets licensing requirements, and puts their own name on it with an appropriate mention that AI was used.

The Assisted-by tag format is Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2], covering scenarios where one or more tools were used. The document gives Assisted-by: Claude:claude-3-opus coccinelle sparse as an example.
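
In practice, the sign-off area of a patch would then look something like this. The subject, body, and author are made up for illustration; the Assisted-by line is the document's own example:

```
ksmbd: fix out-of-bounds read in session setup

Validate the negotiated token length before copying it into the
session buffer.

Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Developer <jane@example.org>
```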

Back then, Linus was not even convinced a dedicated tag was necessary and suggested the changelog body would do the job. But now, the kernel community seems to have settled on the tag anyway.

It's already in use

We covered this earlier in the week, but Greg Kroah-Hartman (GKH) seems to have had AI-assisted fuzzing running in his kernel tree for a while now, in a branch called "clanker." He started with the ksmbd and SMB code, found some potential issues, and submitted fixes with a note telling reviewers to verify everything independently before trusting any of it.

That is just about the workflow the new policy was written around. AI surfaces issues, a human with decades of kernel experience decides what is real, writes the fix, and takes responsibility. GKH being the one doing it is no surprise, given he is the stable kernel maintainer and has probably dealt with more bad patches than anyone else.

Other projects have gone in a different direction. Gentoo banned AI-generated contributions entirely in 2024, with its council citing copyright risk, code quality, and ethical concerns.

NetBSD's commit guidelines put LLM-generated code in the "tainted code" category, requiring written approval from the core developers before any of it goes in.

In contrast, Linux is not banning anything. Whether that turns out to be the sensible call or just a lenient one will depend on how seriously people actually take the "a human reviewed this" part.


Suggested Read 📖: Is a Clanker Being Used in Linux Development?

Linux Kernel 7.0 is Out With Improvements Across the Board for Intel, AMD, and Storage

April 13, 2026 at 06:08

The development of the Linux kernel moves fast, and the 7.0 release is no exception. Around the same time as this release, a patch queued for Linux 7.1 has kicked off what will eventually be the end of i486 CPU support in the kernel.

But that's a story for another time. For now, let's focus on what Linux 7.0 brings to the table.

Head penguin, Linus Torvalds, had the following words to say regarding the release:

The last week of the release continued the same "lots of small fixes" trend, but it all really does seem pretty benign, so I've tagged the final 7.0 and pushed it out.

I suspect it's a lot of AI tool use that will keep finding corner cases for us for a while, so this may be the "new normal" at least for a while. Only time will tell.

This coverage is based on the detailed reporting from Phoronix.

Linux Kernel 7.0: What's New?

The release is here, and before getting into the improvements, there is one thing worth getting out of the way first.

This is not a long-term support release. If your priority is stability and extended maintenance, this is not the kernel to land on. Instead, you could opt for Linux kernel 6.18, which is supported until December 2028.

Intel Upgrades

Linux 6.19 already added audio support for Intel Nova Lake S, but the standard Nova Lake (NVL) variant was left out. That's fixed in 7.0; spec-wise, the main difference between the two is core count (4 vs. 2).

Intel Arc users get something useful too. The Xe driver now exposes a lot more temperature data through the HWMON interface. Previously you got a single GPU core reading; now you get shutdown, critical, and max temperature limits, plus memory controller, PCIe, and individual vRAM channel temperatures.
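On a 7.0 kernel with a supported Arc card, the new readings should surface through the standard HWMON tooling. A quick, hedged way to inspect them (labels and sysfs paths vary by card and driver version):

    # dump everything lm-sensors can see, including the new Xe channels
    sensors

    # or list the raw temperature labels the driver exposes
    grep . /sys/class/hwmon/hwmon*/temp*_label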

Panther Lake also gets GSC firmware loading and Protected Xe Path (PXP) support.

And lastly, Diamond Rapids (the upcoming Xeon successor to Granite Rapids) gets NTB driver support, which handles high-speed data transfers between separate systems over PCIe. It is expected to be helpful for distributed storage and cluster setups.

AMD Refinements

While the Zen 6 series of CPUs is still a while out, the kernel is already getting ready for it. Linux 7.0 merges perf events and metrics support for AMD Zen 6, covering performance counters for branch prediction, L1 and L2 cache activity, TLB activity, and uncore events like UMC command activity.

All of that is mainly useful for developers and admins doing performance profiling ahead of launch, and not something the average user will notice.

For virtualization, KVM picks up support for AMD ERAPS (Enhanced Return Address Predictor Security), a Zen 5 security feature. In VM scenarios, this bumps the Return Stack Buffer from 32 to 64 entries, letting guests make full use of the larger RSB.

AMD is also laying the groundwork for next-gen GPU hardware in 7.0, enabling new graphics IP blocks for what looks like an upcoming RDNA 4 successor and another RDNA 3.5 variant.

There are also hints of deeper NPU integration with future Radeon hardware, but AMD hasn't announced anything yet, so exact product details remain a mystery for now.

Better Storage Handling

XFS gets one of the more interesting additions of this release: autonomous self-healing. A new xfs_healer daemon, managed by systemd, watches for metadata failures and I/O errors in real time and triggers repairs automatically while the filesystem stays mounted.
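Since the daemon is systemd-managed, you should be able to watch it like any other service once your kernel and tooling ship it. A minimal sketch; the unit name below is an assumption based on the daemon's name:

    # check whether the self-healing daemon is active (unit name assumed)
    systemctl status xfs_healer.service

    # follow its repair activity live in the journal
    journalctl -u xfs_healer.service -f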

Btrfs picks up direct I/O support for block sizes larger than the kernel page size, falling back to buffered I/O when the data profile has duplication. There's also an experimental remap-tree feature, which introduces a translation layer for logical block addresses that lets the filesystem handle relocations and copy-on-write operations without physically moving or rewriting blocks.

EXT4 sees better write performance for concurrent direct I/O writes to multiple files by deferring the splitting of unwritten extents to I/O completion. It also avoids unnecessary cache invalidation and forced ordered writes when appending with delayed allocation.

Miscellaneous Changes

Wrapping up this section, we have some other notable changes that made it into this release:

  • RISC-V gains user-space control-flow integrity (CFI) support.
  • WiFi 8 Ultra-High Reliability (UHR) groundwork lands in the networking stack.
  • Security bug report documentation gets an overhaul to help AI tools send more actionable reports.
  • Rust support is officially no longer experimental, with the kernel team formally declaring it is here to stay.
  • ASUS motherboards, including the Pro WS TRX50-SAGE WIFI A and ROG MAXIMUS X HERO, now have working sensor support.

Installing Linux Kernel 7.0

As always, those on rolling distros like Arch Linux, as well as Fedora and its derivatives, will get this new release very soon. For those on distros like Debian, Linux Mint, Ubuntu, and MX Linux, you will most likely not receive this upgrade through regular updates.

If that doesn't work for you, you could always install the latest mainline Linux kernel on your Ubuntu setup. It goes without saying that this is risky; if you end up borking your system, we are not to blame for it.
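One popular route is the community-maintained Mainline tool, which installs builds from Ubuntu's mainline kernel archive. A minimal sketch, assuming the maintainer's PPA is still the right source (verify the PPA and the project's status before adding it):

    # add the PPA that ships the Mainline installer (assumed source)
    sudo add-apt-repository ppa:cappelikan/ppa
    sudo apt update
    sudo apt install mainline

    # launch the GUI (binary name per the project's packaging)
    # and pick 7.0 from the list
    mainline-gtk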

How to Take Screenshots in Linux Mint [Beginner's Tip]

April 12, 2026 at 13:51
By: Sreenath

Linux Mint is known for being simple and beginner friendly. It works out of the box with most essential features ready to use, so you don’t have to spend time setting things up. One such basic task is taking screenshots, and Mint makes it very easy even if you are completely new to Linux.

In this beginner's guide, we will look at the built-in screenshot tool in Linux Mint and the keyboard shortcuts you can use right away.

📋
This article is part of the Linux Mint beginner's tutorial series.

The GUI screenshot tool that you don't want to miss

Linux Mint provides a simple graphical interface for those who prefer a GUI solution for taking screenshots.

Beyond the basic options, the tool also includes a few useful features. Let’s take a look at them next.

First, open the Screenshot tool by searching for it in the start menu.

In the Linux Mint Start menu, search for Screenshot and open the Screenshot tool.
Open Screenshot Tool
💡
You can pin the Screenshot app to the taskbar for quick access.

The interface is simple and easy to understand. There are three main options:

  • Capture Screen: Takes a screenshot of the entire screen
  • Capture Window: Captures the active window
  • Capture Selection: Lets you capture a specific area by clicking and dragging
Linux Mint GNOME Screenshot Utility Interface.
Screenshot Tool Interface

After choosing the method, click the Take Screenshot button at the top left of the window.

Show mouse cursor in screenshot

In the Screenshot tool, you will find an option called Show Pointer. Enable this if you want the mouse pointer to be visible in your screenshots.

Show Pointer option in GNOME Screenshot Utility in Linux Mint.
Show Pointer

Take screenshot with a delay

You can also set a small delay before taking a screenshot.

🚧
This does not apply to keyboard shortcuts by default.

In the Screenshot tool, enter a value in seconds under the Delay in Seconds option.

Add a delay to taking screenshot in Linux Mint.
Add a Delay to Screenshot

Once set, the tool waits for the specified time before capturing when used from the GUI. For example, if you set it to 5 seconds, the screenshot is taken after a 5-second delay.

💡
One common use case for delay is capturing the mouse cursor in window or area screenshots. Without a delay, the screenshot is taken instantly, so you do not get time to move the cursor from the Screenshot tool to the target application or position it properly.

Using keyboard shortcuts

If you prefer not to open a GUI app every time you take a screenshot, that is not a problem. Linux Mint provides keyboard shortcuts that let you quickly capture the screen in different ways.

Take a screenshot of the entire screen

You can press the PrtScr key on your keyboard to capture the entire screen.

After taking the screenshot, you will be prompted to either save it with a name or copy it to the clipboard. This works well for basic use.

However, this can feel limited if you only want to capture a small part of the screen. The good news is that Linux Mint also provides an easy way to do that.

Take a screenshot of an area

To take a screenshot of a specific area, use the Shift + PrtScr shortcut.

Your screen will dim slightly and the cursor will change to a plus sign. Click, hold, and drag to select the area you want to capture.

Once you release the mouse button, you can choose to copy the screenshot or save it.

🚧
Keep in mind that you cannot adjust the selection after releasing the click, so make sure to select the area carefully.

Take a screenshot of a window

Sometimes, you may want to capture only the currently active window. While you can do this using the area selection method, using a shortcut is much more convenient.

Press Alt + PrtScr to take a screenshot of the active window.

There are a few things to keep in mind. If a menu is open inside the window, like a top menu or a right-click context menu, this shortcut may not work.

🚧
In my case, none of the screenshot shortcuts worked while the focused window had a menu open. In that situation, you need to set a delay before taking the screenshot, which we cover in the custom shortcuts section below.

Also, if a dialog box is open, the tool will capture whichever window is active at that moment, whether it is the main window or the dialog.

Record the screen

Many people do not realize that Linux Mint also includes a built-in screen recorder. It is not visible in the menus, so it is easy to miss.

Press Shift + Ctrl + Alt + R to start recording your screen. Press the same combination again to stop recording.

This is a basic tool, so do not expect features like those in dedicated applications such as OBS Studio or SimpleScreenRecorder. It simply records your entire screen.

When you stop the recording, the video file is saved in the Videos folder inside your Home directory.

Custom Shortcuts

In the previous section, we saw that the GUI tool offers options like delay and showing the mouse pointer, which are not available with the default keyboard shortcuts.

However, this does not mean you are limited. In Linux Mint, you can create custom shortcuts to include these actions as well.

The screenshot options

Before setting up custom screenshot shortcuts, it helps to understand the available options. Linux Mint uses the GNOME Screenshot tool for both the GUI and keyboard based screenshots.

GNOME Screenshot provides several useful options, along with many more that you can explore in its man page.

  • gnome-screenshot -w: Takes a screenshot of the current active window
  • gnome-screenshot -a: Takes a screenshot of a region you select by clicking and dragging
  • gnome-screenshot -d 5: Adds a 5-second delay before capturing the entire screen
  • gnome-screenshot -d 5 -p: Applies a 5-second delay and includes the pointer in the screenshot
  • gnome-screenshot -d 5 -a and gnome-screenshot -d 5 -w: Take a screenshot of a selected area or window, respectively, with a 5-second delay
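These flags can be combined, and adding --file skips the save dialog entirely, which is handy for the custom shortcuts below. A couple of illustrative one-liners (the save path is just an example):

    # window shot after a 5-second delay, pointer included,
    # saved straight to a file with no save dialog
    gnome-screenshot -w -d 5 -p --file="$HOME/Pictures/window-shot.png"

    # full-screen shot sent directly to the clipboard
    gnome-screenshot -c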

Setting custom screenshot shortcuts

Search for and open Keyboard from the start menu.

Search for keyboard in Start Menu and open the Keyboard application from the list.
Open Keyboard Application

Go to the Shortcuts tab and then select Custom Shortcuts. Click on the Add custom shortcut button.

In the shortcuts tab of Keyboard application, go to Custom Shortcut and select the Add custom shortcut button.
Add Custom Shortcut

Now, enter a name for the shortcut. For example, you can use "Take screenshot of an area with a delay" in the Name field.

Add a name for the shortcut in the name field and add a command that you want to execute when the key is pressed.
Enter name and command

In the command field, enter the required command. For example, use gnome-screenshot -d 5 -a, and then click the Add button.

The command will now be listed. To assign a shortcut, select it under Keyboard shortcuts and click on the Unassigned option in the Keyboard bindings section.


Add the keybinding to the custom command.

You will be prompted to press a key combination. Press the shortcut you want to use.

You can repeat the same steps to create and assign shortcuts for other commands based on your needs.

Other screenshot tools

Sometimes, basic screenshots are not enough. You may want to annotate an image or add borders and other adjustments.

These are image editing features, and they are not available in the default Screenshot tool in Linux Mint.

For such needs, you can use third party screenshot tools that offer more control and customization.

We have a separate article that covers screenshot tools you can use in more detail. You can refer to it to find options that suit different needs and use cases.

As a quick note, Flameshot and Ksnip are two good screenshot tools you can use for editing and customization. You can also try Gradia, which offers basic editing as well.

Did you find it useful? Feel free to share your thoughts in the comments.

Privacy Messenger Session Is Staring Down a 90-Day Countdown to Obscurity

April 10, 2026 at 20:24

If you care about privacy and don't take too well to governments and Big Tech companies snooping on your messages, then Session has probably come up at some point. It's a free, open source, end-to-end encrypted messaging app that doesn't ask for your phone number or email to sign up.

Messages are routed through an onion network rather than a central server, and the combination of no-metadata messaging, anonymous sign-up, and decentralized architecture has earned it a loyal following among privacy-conscious users.

Unfortunately, the project has sent out a mayday call as it risks closure.

A call for help

Your donations have helped, and the Session Technology Foundation (STF) has received enough funding to support critical operations for 90 days.

This means that Session will remain available on the app stores and essential services (such as the file server and push notification…

— Session (@session_app) April 9, 2026

The Session Technology Foundation (STF) sent out what can only be described as a distress signal, announcing that the app's survival is now in serious peril. The day it was posted on was also the last working day for all paid staff and developers at the STF.

From that point on, Session is being kept running entirely by volunteers.

The donations received earlier are enough to keep critical infrastructure online until July 8, but nowhere near enough to retain a development team. With nobody left on payroll, development has been paused.

Due to that, introducing new features is off the table, existing bugs will most likely go unaddressed, and the STF says new releases are unlikely during this period.

Session co-founder Chris McCabe had already flagged the trouble coming. In a personal appeal published earlier in March, he wrote that the organizations safeguarding Session had faced many challenges over the years and that the project's very survival was now at risk.

He concluded with this appeal:

The project is on a path to self-sustainability, but the future is fragile. If every Session user contributed just one dollar, it would go a long way towards Session reaching sustainability. If you've ever considered donating, now is the time to act.

That appeal didn't do enough to change the outcome, so the Session folks had to sound the alarm. The foundation says it needs $1 million to complete the work still in progress.

That includes Protocol v2, which adds forward secrecy (PFS), post-quantum cryptography, and improved device management, as well as Session Pro, a subscription tier intended to put the project on a self-sustaining footing.

If that goal is hit, the STF says it hopes Session could stand on its own without needing to go back to the community for more.

As of writing, $65,000 of that $1 million has been raised. Anyone who wants to see this privacy-focused messaging app survive, especially at a time when surveillance is only getting worse, can donate at getsession.org/donate.


Suggested Read 📖: Session's Other Co-Founder Thinks You Don't Need to Ditch WhatsApp

Good News! France Starts Plan to Replace Windows With Linux on Government Desktops

April 10, 2026 at 17:16

France's national digital directorate, DINUM, has announced (in French) it is moving its workstations from Windows to Linux. The announcement came out of an interministerial seminar held on April 8, organised jointly by the Directorate General for Enterprise (DGE), the National Agency for Information Systems Security (ANSSI), and the State Procurement Directorate (DAE).

The Linux switch is not the only move on the table. France's national health insurance body, CNAM, is migrating 80,000 of its agents to a set of homegrown tools: Tchap for messaging, Visio for video calls (more on this later), and France transfert for file transfers.

The country's national health data platform is also set to move to a sovereign solution by the end of 2026.

Beyond the immediate moves, the seminar laid out a broader plan. DINUM will coordinate an interministerial effort built around forming coalitions between ministries, public operators, and private sector players, with interoperability standards at the core (the Open Interop and Open Buro initiatives are specifically named).

Every French ministry, including public operators, will be required to submit its own non-European software reduction plan by Autumn 2026.

The plan is expected to cover things like workstations, collaboration tools, antivirus, AI, databases, virtualization, and network equipment. A first set of "Industrial Digital Meetings" is planned for June 2026, where public-private coalitions are expected to be formalized.

Speaking on this initiative, Anne Le Hénanff, Minister Delegate for Artificial Intelligence and Digital Affairs, added that (translated from French):

Digital sovereignty is not optional — it is a strategic necessity. Europe must equip itself with the means to match its ambitions, and France is leading by example by accelerating the shift to sovereign, interoperable, and sustainable solutions.
By reducing our dependence on non-European solutions, the State sends a clear message: that of a public authority taking back control of its technological choices in service of its digital sovereignty.

You might remember, a few months earlier, France set out on a similar path for video conferencing. The country mandated that every government department switch to Visio, its homegrown, MIT-licensed alternative to Teams and Zoom, by 2027.

Part of the broader La Suite Numérique initiative, it had already been tested with 40,000 users across departments before the mandate was announced. So this move looks like an even more promising one, and we shall keep an eye on how this pans out.


Suggested Read 📖: ONLYOFFICE Gets Forked

Is a Clanker Being Used to Carry Out AI Fuzzing in the Linux Kernel?

April 10, 2026 at 13:16

With the rise of AI and humanoid robots, the word "Clanker" is being used to describe such solutions, and rightly so. In their current state, these are quite primitive, and while they can act like something resembling human intelligence, they still can't match what nature cooked up.

Now that terminology has made its way into the Linux kernel thanks to Greg Kroah-Hartman (GKH), the Linux stable kernel maintainer and the closest thing the project has to a second-in-command.

He has been quietly running what looks like an AI-assisted fuzzing tool on the kernel that lives in a branch called "clanker" on his working kernel tree. Before you ask, fuzzing is a method of automated software testing that bombards code with unexpected, malformed, or random inputs to trigger crashes, memory errors, and other misbehavior.

It is a critical line of defense for a massive codebase like Linux.

How it started

A post by Greg Kroah-Hartman laying out how he is experimenting with some new fuzzing tools

It began with the ksmbd and SMB code. GKH filed a three-patch series after running his new tooling against it, describing the motivation quite simply. He picked that code because it was easy to set up and test locally with virtual machines.

What the fuzzer flagged were potential problems specific to scenarios involving an "untrusted" client. The three fixes that came out of it addressed an EaNameLength validation gap in smb2_get_ea(), a missing bounds check that required three sub-authorities before reading sub_auth[2], and a mechToken memory leak that occurred when SPNEGO decode fails after token allocation.

GKH was very direct about the nature of the patches, telling reviewers: "please don't trust them at all and verify that I'm not just making this all up before accepting them."

These pictures show the Clanker T1000 in operation.

It does not stop there. The clanker branch has since accumulated patches across a wide range of subsystems, including USB, HID, WiFi, LoongArch, networking, and more.

Who is GKH?

If you are not well versed with the kernel world, GKH is one of the most influential people in Linux development.

He has been maintaining the stable kernel branch for quite a while now, which means every long-term support kernel that powers servers, smartphones, embedded devices, and pretty much everything else running Linux passes through his hands.

He also wrote Linux Kernel in a Nutshell back in 2006, which is freely available under a Creative Commons license. It remains one of the more approachable references for anyone trying to understand kernel configuration and building, and it is long overdue for a new edition (hint hint).

Linus has been thinking about this too

Speaking at Open Source Summit Japan last year, Linus Torvalds said the upcoming Linux Kernel Maintainer Summit will address "expanding our tooling and our policies when it comes to using AI for tooling."

He also mentioned running an internal AI experiment where the tool reviewed a merge he had objected to. The AI not only agreed with his objections but found additional issues to fix.

Linus called that a good sign, while asserting that he is "much less interested in AI for writing code" and more interested in AI as a tool for maintenance, patch checking, and code review.

AI should assist, not replace

There is an important distinction worth making here. What GKH appears to be doing is not having AI write kernel code. The fuzzer surfaces potential bugs; a human with decades of kernel experience reviews them, writes the actual fixes, and takes responsibility for what gets submitted.

If that's the case, then this is the sensible approach, and it mirrors what other open source projects have been formalizing. LLVM, for instance, adopted a "human in the loop" AI policy earlier this year, requiring contributors to review and understand everything they submit, regardless of how it was created.


Suggested Read 📖: Greg Kroah-Hartman Bestowed With The European Open Source Award

Microsoft Locked Out VeraCrypt, WireGuard, and Windscribe from Pushing Windows Updates

April 10, 2026 at 06:41

Microsoft has had a complicated relationship with the open source world. VSCode, TypeScript, and .NET are all projects it created, and its acquisition of GitHub put it in charge of the world's largest code hosting platform.

But it is also the same company that bakes telemetry into Windows by default and has been aggressively pushing Copilot AI into every corner of its software. That last part especially has been nudging a growing number of people toward open alternatives.

And now, a wave of developer account suspensions has given some open source developers a new headache.

What's happening?

A forum post by Mounir Idrassi discussing the unfair suspension of his Microsoft account, which was used to sign Windows drivers and the bootloader

Microsoft rolled out mandatory account verification for all partners enrolled in the Windows Hardware Program who had not completed verification since April 2024. The requirement kicked in on October 16, 2025, giving partners 30 days from notification to verify their identity with a government-issued ID.

Plus, that ID has to match the name of the Partner Center primary contact. Miss the deadline or fail verification, and your account gets suspended with no further submissions allowed.

This matters because signing Windows kernel drivers requires one of these accounts. Without it, developers cannot ship signed driver updates for Windows, and Windows will flag unsigned drivers, blocking them from loading at the kernel level.

Three major open source projects found this out the hard way. VeraCrypt, WireGuard, and Windscribe all had their developer accounts suspended, cutting off their ability to ship updates on Windows.

It appears @Microsoft is actively suspending developer accounts with no warning or reason of various security tools like VeraCrypt, WireGuard and also Windscribe. We've had this VERIFIED account for 8+ years to sign our drivers.

We've been trying to resolve this for over a… https://t.co/iwkryuwKuO pic.twitter.com/7VcnAQIbnP

— Windscribe (@windscribecom) April 8, 2026

VeraCrypt developer Mounir Idrassi was the first to go public. In a SourceForge forum post, he wrote that Microsoft had terminated his account with no prior warning, no explanation, and no option to appeal.

Repeated attempts to reach Microsoft through official channels got him nothing but automated replies. The suspension hit his day job too, not just VeraCrypt.

WireGuard creator Jason Donenfeld hit the same wall a couple of weeks later, when he went to certify a new WireGuard kernel driver for Windows and found his account showing as access restricted. He eventually tracked down a Microsoft appeals process, but it carried a 60-day response window.

Windscribe's situation was arguably the messiest. The company says it had held a verified Partner Center account for over eight years and spent more than a month trying to sort things out before going public.

Moreover, once an account is suspended, Partner Center blocks users from opening a support ticket directly.

What now?

This eventually got Microsoft's attention: Scott Hanselman, VP and Member of Technical Staff at Microsoft and GitHub, stepped in on X to say the accounts would be fixed. He pointed to the October 2025 blog post (linked earlier) and said the company had been sending emails to affected partners since then.

Scott confirmed he had personally reached out to both Mounir and Jason to get their accounts unblocked, and that fixes were already in progress.

Anyway, this doesn't look good, and leaving developers of critical security software without recourse for weeks only erodes trust. But, in the end, this won't really affect a behemoth like Microsoft, which has a dominant hold on the operating system market.


Suggested Read 📖: Proton Workspace and Meet launched as alternatives to Big Tech offerings
