“What Nginx Logs Prove about AI Traffic vs Referral Traffic,” n.d. https://surfacedby.com/blog/nginx-logs-ai-traffic-vs-referral-traffic
Apparently some big-tech LLMs do not read robots.txt. Others don't even query your website when asked and re-use an existing search cache instead. The last group visits your website directly and respects your AI-scraping permissions. But yeah… it's only Anthropic's model. The rest do really shady shit.
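You can check this yourself from nginx access logs. A minimal sketch, assuming the default nginx "combined" log format; the sample log lines, IPs, and paths below are made up for illustration, and the user-agent list is just a few well-known AI crawlers, not an exhaustive one:

```shell
# Fake sample lines in nginx "combined" format (illustrative only)
cat > /tmp/access.log <<'EOF'
1.2.3.4 - - [01/May/2026:10:00:00 +0000] "GET /blog/post HTTP/1.1" 200 512 "-" "GPTBot/1.0"
5.6.7.8 - - [01/May/2026:10:00:01 +0000] "GET /robots.txt HTTP/1.1" 200 64 "-" "ClaudeBot/1.0"
9.9.9.9 - - [01/May/2026:10:00:02 +0000] "GET / HTTP/1.1" 200 1024 "https://google.com" "Mozilla/5.0"
EOF

# How many hits does each known AI crawler generate?
grep -oE 'GPTBot|ClaudeBot|PerplexityBot|Bytespider' /tmp/access.log | sort | uniq -c

# Which of them ever bothered to fetch robots.txt?
grep 'robots.txt' /tmp/access.log | grep -oE 'GPTBot|ClaudeBot|PerplexityBot|Bytespider'
```

On a real server you would point the greps at /var/log/nginx/access.log instead; a crawler that shows up in the first list but never in the second is one that never read your robots.txt.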
.
“Hardkernel ODROID EMMC Guide / Performance Benchmarking,” n.d. https://jamesachambers.com/hardkernel-odroid-emmc-guide-performance-benchmarking/
Some time ago I bought myself a PineTab Pro, which I bricked by installing an OS from an SD card directly onto its eMMC chip. Sadly, that same chip held everything needed to boot the OS.
Soooo now I need an eMMC adapter so I can flash the chip directly from some other computer. While researching how to do it I found this post, which has a detailed guide on how to handle eMMC chips.
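For reference, the actual flashing step is just a dd write to whatever device node the adapter exposes. A minimal sketch; the image name and device are assumptions, and here a loopback file stands in for the real adapter so the sketch is safe to run:

```shell
# Assumed image name (hypothetical); on real hardware find the adapter's
# device node with `lsblk` first -- dd to the wrong device destroys data!
IMG=os-image.img
DEV=emmc-standin.img   # stand-in file; replace with /dev/sdX for the real adapter

# Create a dummy 4 MiB image just so the demo has something to write
dd if=/dev/zero of="$IMG" bs=1M count=4 status=none

# Write the image to the target, flushing caches before dd exits
dd if="$IMG" of="$DEV" bs=4M conv=fsync status=progress

# Verify the written bytes match the image
cmp -n "$(stat -c %s "$IMG")" "$IMG" "$DEV" && echo "flash verified"
```

The `conv=fsync` and the checksum-style `cmp` at the end matter more than they look: with removable adapters it's easy to yank the chip before the kernel has actually finished writing.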
.
“Mythos and Cybersecurity,” n.d. https://www.schneier.com/blog/archives/2026/04/mythos-and-cybersecurity.html
A really good blog post which tries to shed light on Mythos, a recent Anthropic model aimed at security researchers.
I agree with the author that not giving everyone access to it was a wise decision from Anthropic. It’s also a decision which OpenAI would not take.
Later, the author contemplates who should govern such models as they’re currently in the hands of a private company.
Here I would like to add a couple of my own thoughts. I really don't think such models are something unique. In two, maybe three years, small open models trained by independent developers will surely capture the market.
Anthropic is a company which does not have its own monopoly on source-code storage. I would be worried if such a model were created by Microsoft, since that could mean you need a crazy amount of data from, for example, GitHub. But I don't think there is anything unique about what Anthropic is doing. Soon the DeepSeek team, or some other research group, will come up with a similar model and make it available to everyone.
Are we ready for this? I don't think so. Measures to prevent humans from being misled by LLMs currently do not exist. I think it's a clear path to a dead internet where most of the content is AI-generated.
AI that can find vulnerabilities in open-source software has huge potential. I think with wide adoption it will really improve the quality of the code, though not without measures on top of development, like code review or some other safeguard we have not come up with yet.
.