In just three decades, the internet has transformed from a vibrant network of personal webpages and forums into a landscape dominated by ultra-short videos, algorithmic curation, and passive scrolling. This article traces the evolution of online content creation—from thoughtful, long-lived expressions to impulsive, ephemeral media—and examines what we've gained and lost along the way. If you've ever wondered why the internet feels faster but emptier, this deep dive connects the dots.
In this article, I explore the various ways I integrate large language models (LLMs) like ChatGPT and LLAMA into my daily work and personal projects. From summarizing scientific papers and refining research communication to enhancing creativity through AI-generated artwork and automating everyday tasks, LLMs have become invaluable tools in my workflow. I also discuss how AI can assist in coding, structuring complex ideas, and even helping friends navigate social and emotional challenges. Whether you're a researcher, a maker, or simply curious about AI's capabilities, this article offers insights into practical, real-world applications of LLMs.
Large Language Models (LLMs) exhibit behavior that appears empathetic, leading to discussions about whether they genuinely "feel" emotions or merely simulate them. This article explores how LLMs generate emotionally resonant responses, relying on statistical pattern recognition rather than subjective experience. It presents arguments from functionalism and behaviorism that suggest LLMs may be functionally equivalent to emotional beings, while critics highlight the lack of neurochemical processes and self-initiated emotional states. The discussion also raises philosophical questions about human cognition, given that psychology often models human emotions statistically. As AI progresses, distinguishing between true emotional experience and advanced simulation will remain a key debate in both science and ethics.
Deepfake technology is no longer a futuristic threat—it is an unstoppable reality that is reshaping the way we perceive media, trust information, and engage with digital content. From hyper-realistic videos to AI-generated images that even experts struggle to distinguish from real ones, these tools are now widely accessible to anyone with a consumer-grade computer. Governments attempt to regulate and control the spread of such technologies, but like encryption before them, deepfakes cannot be banned or contained. Instead of resisting the inevitable, society must adapt by enhancing critical thinking, questioning sources, and moving beyond blind trust in media appearances. The implications of deepfakes reach far beyond political manipulation and misinformation; they have the potential to impact personal lives through blackmail, fabricated scandals, and AI-generated smear campaigns. Society must not only acknowledge these risks but also actively counteract their effectiveness by changing how we evaluate and react to digital content. Education in media literacy, the rejection of personality cults, and a stronger emphasis on factual verification over emotional reactions will be essential in mitigating the disruptive influence of deepfake technology. The future will not be about stopping deepfakes—it will be about learning how to navigate a world where seeing is no longer believing.
Generative Pre-trained Transformers (GPTs) and Large Language Models (LLMs) have revolutionized the field of artificial intelligence by demonstrating advanced capabilities in understanding and generating human-like language. This article peeks (in a non-technical way) into the inner workings of GPTs, from their neural network foundations to innovative mechanisms like attention and contextual embeddings. It explores how these systems generalize patterns from vast training datasets to produce meaningful, context-aware responses. Contrary to common misconceptions, LLMs do not memorize training data but instead apply logical reasoning and learned structures to novel situations. While their static weights differ from the dynamic learning of the human brain, LLMs exhibit creativity and adaptability through mechanisms like transfer learning and external data integration. The article also examines the boundaries between simulation and consciousness, highlighting the transformative potential of LLMs for applications like virtual assistants and personalized systems while addressing their constraints.
Generative Pre-trained Transformers (GPTs) like OpenAI’s ChatGPT have transformed the landscape of artificial intelligence by offering the ability to generate human-like text, adapt to diverse tasks, and understand context and patterns across various domains. This blog post provides a non-technical, broad overview of GPTs, emphasizing their versatility in applications such as brainstorming, complex problem-solving, data analysis, and more. Despite their impressive capabilities, GPTs are often misunderstood as simple search engines or seen as tools that merely reproduce information from their training data, leading to misconceptions about their potential and legality. The post aims to demystify GPTs by highlighting their unique strengths and differentiating them from traditional tools. By showcasing practical applications and explaining what GPTs can and cannot do, the article seeks to clarify these misunderstandings and demonstrate how GPTs can be powerful partners in fostering innovation, creativity, and effective problem-solving. This understanding will empower readers to leverage GPT technology more effectively in various aspects of their personal and professional lives.
This is an opinion article on why I think one should use version control systems such as git or SVN not only for software development but for everything: writing theses, books, and articles, as well as maintaining websites and other projects.
Some thoughts on why, in my opinion, Jabber/XMPP is one of the best chat systems currently available - and much more than just a simple chat system.
A mini introduction to the nature of the Internet and its basic working principles.
A short introduction on how to use lagg(4) to perform automatic failover from Ethernet to WiFi and back on FreeBSD.
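As a teaser, the core of such a setup is a minimal sketch of an rc.conf fragment, assuming em0 is the wired NIC, iwn0 the wireless NIC, and 00:11:22:33:44:55 stands in for the wired card's MAC address (all three are placeholders; for failover to work, the wireless interface must be given the same MAC address as the wired one):

```shell
# /etc/rc.conf fragment - sketch only; em0, iwn0 and the MAC are example values
ifconfig_em0="up"
wlans_iwn0="wlan0"
# Clone the wired NIC's MAC onto the wireless interface so lagg failover works
ifconfig_wlan0="WPA ether 00:11:22:33:44:55"
cloned_interfaces="lagg0"
# Failover: traffic uses em0 while its link is up, wlan0 otherwise
ifconfig_lagg0="up laggproto failover laggport em0 laggport wlan0 DHCP"
```

The full article covers the details and the pitfalls of this approach.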
Because it is sometimes said that TOR hidden services are only used by criminals, this post lists a few more credible services and some reasons why one might use them.
This article gives a short overview of multifactor authentication and the various factors that may be used, followed by a tutorial on how to use the Yubikey hardware token for web, local, and SSH (PAM) authentication on FreeBSD.
A short description of how to use TOR as a normal user and what to be aware of when using it.
The blogosphere is a web of interconnected websites. Like all pages in the WWW they link to each other, but in addition they provide an easy way of notifying each other when they are referenced by a third party. Pingback is - besides trackbacks - the (XML-RPC based) technology used for that notification.
This blog post provides a short overview of TAN alternatives and methods as well as their pitfalls and security properties.
An easy method to add tags to your statically generated Jekyll pages and blogs.
A short description (in German) of how to use the Facebook chat via its XMPP interface.
Dipl.-Ing. Thomas Spielauer, Wien (webcomplains389t48957@tspi.at)
This webpage is also available via TOR at http://rh6v563nt2dnxd5h2vhhqkudmyvjaevgiv77c62xflas52d5omtkxuid.onion/