Sunday, 7 September 2014

How I view our species and our world

My recent blog post "The resource leak bug of our civilization" has gathered quite a bit of interest, especially after being noticed by Ran Prieur in his blog. I therefore decided to translate another essay to give it a wider context. Titled "A few words about humans and the world", it is intended to be a kind of holistic summary of my worldview, and it is aimed especially at people who have had difficulties in understanding the basis of some of my opinions.

---

This writeup is supposed to be concise rather than convincing. It therefore skips a lot of argumentation, linking and breakdowns that might be considered necessary by some. I'll get back to them in more specific texts.

1. Constructions

Humans are builders. We build not only houses, devices and production machinery, but also cultures, conceptual systems and worldviews. Various constructions can be useful as tools; however, we also have an unfortunate tendency to chain ourselves to them.

Right now, humankind has chained itself to the worship of abundance: it is imperative to produce and consume more and more of everything. Quantitative growth is imagined to be the same thing as progress. Especially during the last hundred years, the theology of abundance has invaded such deep and profound levels that most people don't even realize its effect. It's not just about consumerism on a superficial level, but about the whole economic system and worldview.

Extreme examples of growth ideology can easily be found in the digital world, where it manifests in a form raised to the power of two. What happens if worshippers of abundance get their hands on a virtual world where the amount of available resources increases exponentially? Right, they will start bloating up the use of resources, sometimes even for its own sake. It is not at all uncommon to require a thousand times more memory and computational power than necessary for a given task. Mindless complexity and purposeless activities are equated with technological advancement. The tools and methods the virtual world is built with have been designed from the point of view of idealized expansion, so it is difficult to even imagine alternatives.

I have some background in a branch of hacker culture, the demoscene, where the highest ideal is to use minimal resources in an optimal way. The nature of the most valued progress there is condensing rather than expanding: doing new things under ever stricter limitations. This has helped me perceive the distortions of the digital world and their counterparts in the material world.

In everyday life, the worship of growth shows up, above all, as the complexification of everything. It is becoming increasingly difficult to understand various socio-economic networks or even the functionality of ordinary technological devices. This alienates people from the basics of their lives. Many try to fight this alienation by creating pockets of understandability. Escapism, conservatism and extremism rise. On the other hand, there is also an increase in do-it-yourself culture and a longing for a more self-sufficient way of life. People should be encouraged towards these latter, positive means of countering alienation instead of channels that increase conflicts.

An ever greater portion of techno-economical structures consists of useless clutter, so-called economic tumors. They form when various decision-makers attempt to keep their acquired pieces of the cake as big as possible. Unnecessary complexity slows down progress and makes it one-sided instead of being a requirement for it. Expansion needs to be balanced with contraction -- you can't breathe in without breathing out.

The current phase of expansion is finally about to end, since the fossil fuels that made it possible are getting scarcer, and we still don't know of an equally powerful replacement. As the phase has lasted so long, the transition into contraction will be difficult for many. An ever larger portion of the economy will escape into the digital world, where it is possible to maintain the unrealistic swelling longer than in the material world.

Dependencies of production can be depicted as a pyramid where the things on the higher levels are built from the things below. In today's world, people always try to build at the top, so the result looks more like a shaky tower than a pyramid. Most new things could easily be built at lower levels. The lowest levels of the pyramid could also be strengthened by giving more room to various self-sufficient communities, local production and low-tech inventions. Technological and cultural evolution is not a one-dimensional road where "forward" and "backward" are the only alternatives. Rather, it is a network of possibilities burgeoning in every direction, and even its strange side-loops are worth knowing.

2. Diversity

It is often assumed that growth increases the amount of available options. In principle, this is true -- there are more and more different products on store shelves -- but their differences are more and more superficial. The same is true of ways of life: it is increasingly difficult to choose a way of life that isn't attached to the same chains of production or models of thinking as every other way of life. The alternatives boil down to the same basic consumer-whoredom.

Proprietors overstandardize the world with their choices, but this is probably not a very conscious activity. When there are enough decision-makers who play the same game with the same rules, the world will eventually shape itself around these rules (including all the ingrained bugs and glitches). Conspiracy theories or incarnations of evil are therefore not required to explain what's going on.

The human-built machinery is getting increasingly complex, so it is also increasingly difficult to talk about it in concrete terms. Many therefore seek help from conceptual tools such as economic theories, legal terminology or ideologies, and subsequently forget that they are just tools. Nowadays, money- and production-centered ways of conceptualizing the world have become so dominant that people often don't realize that there are alternatives.

Diversity helps nature adapt to changes and recover from disasters. For the same reason, human culture should be as diverse as possible especially now that the future is very uncertain and we have already started to crash into the wall. It is necessary to make it considerably less difficult to choose radically different ways of life. Much more room should be given to experimental societies. Small and unique languages and cultures should be treasured.

There's no one-size-fits-all model that would be best for everyone. However, I believe that most people would be happiest in a society that actively maintains human rights and makes certain that no one is left behind. Majority rule, however, is not such a crucial feature of a political system in a world where everyone can freely choose a suitable system. Regardless, dissidents should be given enough room in every society: not everyone necessarily has the chance to choose their society, and excessive unanimity tends to be quite harmful anyway.

3. Consciousness

Thousands of years ago, the passion for construction became so overwhelming that the quest for mental refinement didn't keep up with the pace. I regard this as the main reason why human beings are so prone to become slaves of their constructs. Rational analysis is the only mental skill that has been nurtured somewhat sufficiently, and even rational analysis often becomes just a tool for various emotional outbursts and desires. Even very intelligent people may be completely lost with their emotions and motivations, making them inclined to adopt ridiculously one-dimensional thought constructs.

Putting one's own herd before everyone else is an example of an attitude that may work among small hunter-gatherer groups, but which should no longer have a place in modern civilization. A population that has the intellectual facilities to build global networks of cause and effect should also have the ability to make decisions at the corresponding level of understanding instead of being driven by pre-intellectual instincts.

Assuming that humankind still wants to maintain complex societal and technological structures, it should fill its consciousness gap. Any school system should teach the understanding and control of one's own mind at least as seriously as reading and writing. New practical mental methods, suitable for an ever greater variety of people, should be developed at least as passionately as new material technology.

For many people, a worldview is still primarily a way of expressing one's herd instincts. They argue and even fight about whose worldview is superior. Hopefully, the future will bring a more individual attitude towards worldviews: there is no single "truth", only different ways of conceptualizing reality. A way that is suitable for one mind may even be destructive to another. Science produces facts and theories that can be used as building blocks for different worldviews, but it is not possible to put these worldviews into an objective order of preference.

4. Life

The purposes of life for individual human beings stem from their individual worldviews, so it is futile to suggest rules-of-thumb that suit all of them. It is much easier to talk about the purpose of biological life, however.

The basic nature of life, based on how life is generally defined, is active self-preservation: life continuously maintains its form, spreads and adapts to different circumstances. The biological role of a living being is therefore to be part of an ecosystem, strengthening the ecosystem's potential for continued existence.

The longer there is life on Earth, the more likely it is to expand into outer space at some point in time. This expansion may already take place during the human era, but I don't think we should specifically strive for it before we have learned how to behave non-destructively. However, I'm all for the production of raw materials and energy in space, if it helps us abstain from raping our home planet.

At their best, intelligent lifeforms could function as some sort of gardeners: gardeners that strengthen and protect the life of their respective homeworlds and help spread it to other spheres. However, I don't dare to suggest that the current human species has the prerequisites for this kind of role. At this moment, we are so lost that we couldn't even become a galactic plague.

Some people regard the human species as a mistake of evolution and want us to abandon everything that differentiates us from other animals. I see no problem per se in the natural behavior of homo sapiens, however: there's just an unfortunate imbalance of traits. We should therefore not abandon reason, abstractions or constructivity, but rebalance them with more conscious self-improvement and mental refinement.

5. The end of the world

It is not possible to save the world, if it means saving the current societies and consumer-centric lifestyles. At most, we can soften the crash a little bit. It is therefore more relevant to concentrate on activities that make the postapocalyptic world more life-friendly.

As there is still an increasing amount of communications technology and automation in the world, and the privileged even have ever more free time, these facilities should be used right now for sowing the seeds of a better world. If we start building alternative constructs only when circumstances force us to, the transition will be extremely painful.

People increasingly dwell in easiness bubbles facilitated by technology. It is therefore a good idea to bring suitable signals and facilities into these bubbles. Video game technology, for example, can be used to help reclaim one's mind, life and material environment. Entertainment in general can be used to increase interest in such reclaiming.

Many people imagine progress as a kind of unidirectional growth curve and therefore regard the postapocalyptic era as a "return to the past". However, the future world is more likely to become radically different from any previous historical era -- regardless of some possible "old-fashioned" aspects. It may therefore be more relevant to use fantasy rather than history to envision the future.

Tuesday, 5 August 2014

The resource leak bug of our civilization


A couple of months ago, Trixter of Hornet released a demo called "8088 Domination", which shows off real-time video and audio playback on the original 1981 IBM PC. This demo, among many others, contrasts favorably with today's wasteful use of computing resources.

When people try to explain the wastefulness of today's computing, they commonly offer something I call the "tradeoff hypothesis". According to this hypothesis, the wastefulness of software is compensated for by flexibility, reliability, maintainability, and perhaps most importantly, cheap programming work. Even Trixter himself favors this explanation.

I used to believe in the tradeoff hypothesis as well. I saw demo art on extreme platforms as a careful craft that attains incredible feats while sacrificing generality and development speed. However, during recent years, I have become increasingly convinced that the portion of true tradeoff is quite marginal. An ever-increasing portion of the waste comes from abstraction clutter that serves no purpose in the final runtime code. Most of this clutter could be eliminated with more thoughtful tools and methods without any sacrifices. What we have been witnessing in the computing world is nothing utilitarian but a reflection of a more general, inherent wastefulness that stems from the internal issues of contemporary human civilization.

The bug


Our mainstream economic system is oriented towards maximal production and growth. This effectively means that participants are forced to maximize their portions of the cake in order to stay in the game. It is therefore necessary to insert useless and even harmful "tumor material" into one's own economic portion in order to avoid losing one's position. This produces an ever-growing global parasite fungus that manifests as things like black boxes, planned obsolescence and the artificial creation of needs.

Using a software development metaphor, it can be said that our economic system has a fatal bug: a bug that continuously spawns new processes that allocate more and more resources without ever releasing them, eventually stopping the whole system from functioning. Of course, "bug" is a somewhat normative term, and many bugs can actually be reappropriated as useful features. However, resource leak bugs are very seldom useful for anything other than attacking the system from the outside.
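To make the metaphor concrete, here is a minimal, purely illustrative sketch of such a leak in Python; the names and quantities are invented for the metaphor and do not model any real economy:

    # Illustrative sketch of a resource-leaking process: every round claims
    # another block of memory and never releases it, so the system can only
    # run out. The quantities are arbitrary; this is the metaphor in code.

    hoarded = []                      # resources claimed but never released

    def claim_share(kilobytes):
        """Allocate a block and keep a reference to it forever."""
        hoarded.append(bytearray(kilobytes * 1024))

    def run_system(rounds):
        for round_number in range(rounds):
            claim_share(kilobytes=64 + round_number)
            # nothing is ever removed from `hoarded`, so memory use only grows

    if __name__ == "__main__":
        try:
            run_system(rounds=1_000_000)
        except MemoryError:
            print("the whole system has stopped functioning")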

Bugs are often regarded as necessary features by end-users who are not familiar with alternatives that lack the bug. This also applies to our society. Even if we realize the existence of the bug, we may regard it as a necessary evil because we don't know about anything else. Serious politicians rarely talk about trying to fix the bug. On the contrary, it is actually becoming more common to embrace it. A group that calls itself "Libertarians" even builds its ethics on it. Another group, called "Extropians", takes the maximization idea to the extreme by advocating an explosive expansion of humankind into outer space. In the so-called Kardashev scale, the developmental stage of a civilization is straightforwardly equated with how much stellar energy it can harness for production-for-its-own-sake.

How the bug manifests in computing


What happens if you give this buggy civilization a virtual world where the abundance of resources grows exponentially, as in Moore's law? Exactly: it adopts the extropian attitude, aggressively harnessing as many resources as it can. Since the computing world is virtually limitless, it can serve as an interesting laboratory example where the growth-for-its-own-sake ideology takes a rather pure and extreme form. Nearly every methodology, language and tool used in the virtual world focuses on cumulative growth while neglecting many other aspects.

To concretize, consider web applications. There is a plethora of different browser versions and hardware configurations. It is difficult for developers to take all this diversity into account, so the problem has been solved by encapsulation: monolithic libraries (such as jQuery) that provide cross-browser-compatible utility blocks for client-side scripting. Also, many websites share similar basic functionality, so it would be a waste of labor time to implement everything specifically for each application. This problem has also been solved with encapsulation: huge frameworks and engines that can be customized for specific needs. These masses of code have usually been built upon previous masses of code (such as PHP) that were designed for exactly the same purpose. Frameworks encapsulate legacy frameworks, and eventually, most of the computing resources are wasted by the intermediate bloat. Accumulation of unnecessary code dependencies also makes software more bug-prone, and debugging becomes increasingly difficult because of the ever-growing pile of potentially buggy intermediate layers.
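As a toy illustration of where the intermediate bloat comes from -- not any real library; all class names here are invented -- consider how each layer merely wraps the one below it, so that a trivial lookup drags the whole stack along:

    # Toy illustration of layered encapsulation: each "framework" wraps the
    # layer below and adds its own indirection. All names are invented.

    class Platform:                       # the actual capability
        def get(self, key):
            return {"title": "hello"}.get(key)

    class CompatibilityLayer:             # smooths over platform differences
        def __init__(self, inner):
            self.inner = inner
        def get(self, key):
            return self.inner.get(key)    # plus checks, logging, fallbacks...

    class Framework:                      # "convenient" API on top of that
        def __init__(self, platform):
            self.inner = CompatibilityLayer(platform)
        def get(self, key):
            return self.inner.get(key)

    class MegaFramework:                  # yet another framework on top
        def __init__(self):
            self.inner = Framework(Platform())
        def get(self, key):
            return self.inner.get(key)

    print(MegaFramework().get("title"))   # four layers deep for one dictionary lookup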

Software developers tend to use encapsulation as the default strategy for just about everything. It may feel like a simple, pragmatic and universal choice, but this feeling is mainly due to the tools and the philosophies they stem from. The tools make it simple to encapsulate and accumulate, and the industrial processes of software engineering emphasize these ideas. Alternatives remain underdeveloped. Mainstream tools make it far more cumbersome to do things like metacoding, static analysis and automatic code transformations, which would be far more relevant than static frameworks for problems such as cross-browser compatibility.
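As a minimal sketch of what the metacoding alternative could look like -- the target names and API differences below are hypothetical, invented only for the example -- a build step can generate code specialized for each known target, so that no compatibility layer needs to be shipped or executed at runtime:

    # Minimal sketch of build-time specialization instead of a runtime
    # compatibility layer. Target names and API differences are hypothetical.

    TARGETS = {
        "browser_a": "element.textContent = {value!r};",
        "browser_b": "element.innerText = {value!r};",  # hypothetical legacy API
    }

    def generate_set_text(target, value):
        """Emit target-specific source with no runtime branching left in it."""
        return TARGETS[target].format(value=value)

    for target in TARGETS:
        print("// build for", target)
        print(generate_set_text(target, "hello"))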

Tell a bunch of average software developers to design a sailship. They will do a web search for available modules. They will pick a wind power module and an electric engine module, which will be attached to some kind of a floating module. When someone mentions aero- or hydrodynamics, the group will respond by saying that elementary physics is a far too specialized area, and that it is cheaper and more straightforward to just combine pre-existing modules and pray that the combination works sufficiently well.

Result: alienation


The way of building complex systems from more-or-less black boxes is also the way our industrial society is constructed; computing just takes it to a greater extreme. Modularity in computing therefore relates very well to the technology criticism of philosophers such as Albert Borgmann.

In his 1984 book, Borgmann uses the term "service interface", which even sounds like software development terminology. Service interfaces often involve money. People who have a paid job, for example, can be regarded as modules that try to fulfill a set of requirements in order to remain acceptable pieces of the system. When spending the money, they can be regarded as modules that consume services produced by other modules. What happens beyond the interface is considered irrelevant, and this irrelevance is a major source of alienation. Compare someone who grows and chops their own wood for heating to someone who works in the forest industry and buys firewood with the paycheck. In the former case, it is easier to get genuinely interested in all the aspects of forests and wood, because they directly affect one's life. In the latter case, fulfilling the unit requirements is enough.

The way of perceiving the world as modules or devices operated via service interfaces is called the "device paradigm" in Borgmann's work. This is contrasted with "focal things and practices", which tend to have a wider, non-encapsulated significance in one's life. Heating one's house with self-chopped wood is focal. Arts and crafts also provide many examples of focality. Borgmann urges a restoration of focal things and practices in order to counteract the alienating effects of the device paradigm.

It is increasingly difficult for computer users to avoid technological alienation. Systems become increasingly complex, and taking a genuine interest in their inner workings can be discouraging. If you learn something from a system, the information probably won't stay current for very long. If you modify it, subsequent software updates will break your changes. It is extremely difficult to develop a focal relationship with a modern technological system. Even hard-core technology enthusiasts tend to ignore most aspects of the systems they are interested in. As ever-complexifying computer systems become ever more deeply ingrained in our society, they become increasingly difficult to grasp even for those who are dedicated to understanding them. Eventually, even they will give up.

Chopping one's own wood may be a useful way to counteract the alienation of the classic industrial society, as oldschool factories and heating stoves still have some basics in common. In order to counteract the alienation caused by computer technology, however, we need to find new kinds of focal things and practices that are more computerish. If they cannot be found, they need to be created. Crafting with low-complexity computer and electronic systems, including the creation of art based on them, is my strongest candidate for such a focal practice among those that already exist in subcultural form.

The demoscene insight


I have been programming since my childhood, for nearly thirty years. I have been involved with the demoscene for nearly twenty years. During this time, I have grown a lot of angst towards various trends of computing.

Extreme categories of the demoscene -- namely, eight-bit democoding and extremely short programs -- have been helpful for me in managing this angst. These branches of the demoscene provide a useful, countercultural mirror that contrasts with the trends of industrial software development and helps in grasping its inherent problems.

Other subcultures have been far less useful for me in this endeavour. The mainstream of open source / free software, for example, is a copycat culture, despite its strong ideological dimension. It does not actively question the philosophies and methodologies of the growth-obsessed industry but actually embraces them when creating duplicate implementations of growth-obsessed software ideas.

Perhaps the strongest countercultural trend within the demoscene is the move of focus towards ever tighter size limitations, or as they say, "4k is the new 64k". This trend is diametrically opposed to what the growth-oriented society is doing, and it forces one to rethink even the deepest "best practices" of industrial software development. Encapsulation, for example, is still quite prominent in the 4k category (4klang is a monolith), but in 1k and smaller categories, finer methods are needed. When going downwards in size, paths considered dirty by the mainstream need to be embraced. Efficient exploration and taming of chaotic systems needs tools that are deeply different from those used before. Stephen Wolfram's ideas, presented in "A New Kind of Science", can perhaps provide useful insight for this endeavour.

Another important countercultural aspect of the demoscene is its relationship with computing platforms. The mainstream regards platforms as neutral devices that can be used to reach a predefined result, while the demoscene regards them as a kind of raw material that has a specific essence of its own. Size categories may also split platforms into subplatforms, each of which has its own essence. The mainstream wants to hide platform-specific characteristics by encapsulating them into uniform straitjackets, while the demoscene is more keen to find suitable esthetic approaches for each category. In Borgmannian terms, demoscene practices are more focal.

Demoscene-inspired practices may not be the wisest choice for pragmatic software development. However, they can be recommended for the development of a deeper relationship with technology and for diminishing the alienating effects of our growth-obsessed civilization.

What to do?


I am convinced that our civilization is already falling and this fall cannot be prevented. What we can do, however, is create seeds for something better. Now is the best time for doing this, as we still have plenty of spare time and resources especially in rich countries. We especially need to propagate the seeds towards laypeople who are already suffering from increasing alienation because of the ever more computerized technological culture. The masses must realize that alternatives are possible.

A lot of our current civilization is constructed around the resource leak bug. We must therefore deconstruct the civilization down to its elementary philosophies and develop new alternatives. Countercultural insights may be useful here. And since hacker subcultures have been forced to deal with the resource leak bug in its most extreme manifestation for some time already, their input can be particularly valuable.

Sunday, 14 July 2013

Slower Moore's law wouldn't be that bad.

Many aspects of the world of computing are dominated by Moore's law -- the phenomenon that the density of integrated circuits tends to double every two years. In mainstream thought, this is often equated with progress -- a deterministic forward-march towards the universal better along a metaphorical one-dimensional path. In this essay, I'm creating a fictional alternative timeline to bring up some more dimensions. A more moderate pace in Moore's law wouldn't necessarily be that bad after all.

Question: What if Moore's law had been progressing at a half speed since 1980?

I won't try to explain the point of divergence. I just accept that, since 1980, certain technological milestones would have been fewer and farther apart. As a result, certain quantities would have doubled only once every four years instead of every two years. RAM capacities, transistor counts, hard disk sizes and clock frequencies would have only just reached the 1990s level in the year 2000, and in the year 2013, we would be at the 1996 level with regard to these variables.
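For the record, the "1996 level" figure follows from simple arithmetic: at half speed, the 33 years from 1980 to 2013 correspond to roughly 16.5 years of real-world progress. A one-line check:

    # Back-of-the-envelope check for the "1996 level" claim: if the doubling
    # period is four years instead of two, progress since 1980 advances at
    # half speed, so 2013 corresponds to 1980 + (2013 - 1980) / 2 = 1996.5.

    def equivalent_real_world_year(year, divergence=1980, slowdown=2):
        return divergence + (year - divergence) / slowdown

    print(equivalent_real_world_year(2000))  # -> 1990.0
    print(equivalent_real_world_year(2013))  # -> 1996.5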

I'm excluding some hardware-related variables from my speculation. Growth in telecommunications bandwidth, including the spread of broadband, is more related to infrastructural development than to Moore's law. I also consider the technological development of things like batteries, radio transceivers and LCD screens to be unrelated to Moore's law, so their progress would have been more or less unaffected, apart from things like framebuffer and DSP logic.

1. Most milestones of computing culture would not have been postponed.

When I mentioned "the 1996 level", many readers probably envisioned a world where we would be "stuck in the year 1996" in all computing-related aspects: noisy desktop Pentiums running Windows 95 and Netscape Navigator, with users staring in awe at rainbow-colored, static, GIF-animation-plagued websites over landline dialup connections. This says a lot about mainstream views of computer culture: everything is so one-dimensionally techno-determinist that even progress in purely software- and culture-related aspects is difficult to envision without its supposed hardware prerequisites.

My view is that progress in computing and some other high technology has always been primarily cultural. Things don't become market hits straight after they're invented, and they don't get invented straight after they become technologically possible. For example, there were touchscreen-based mobile computers as early as 1993 (Apple Newton), but it took until 2010 before the cultural aspects were right for their widespread adoption (iPad). In the Slow-Moore world, therefore, a lot of people would have tablets just like in our world, even though they probably wouldn't have very many colors.

The mainstream adoption of the Internet would have taken place in the mid-1990s just like in the real world. 1987-equivalent hardware would have been completely sufficient for the boom to take place. Public online services such as Videotex and BBSes had been available since the late 1970s, and Minitel had already gathered millions of users in France in the 1980s, so even a dumb text terminal would have sufficed on the client side. The power of the Internet compared to its competitors was its global, free and decentralized nature, so it would have taken off among common people even without graphical web browsers.

Assuming that the Internet had become popular with character-based interfaces rather than multimedia-enhanced hypertext documents, its technical timeline would have become somewhat different. Terminal emulators would have eventually accumulated features in the same way as Netscape-like browsers did in the real world. RIPscrip is a real-world example of what could have become dominant: graphics images, GUI components and even sound and video on top of a dumb terminal connection. "Dynamic content" wouldn't require horrible kludges such as "AJAX" or "dynamic HTML", as the dumb terminal approach would have been interactive and dynamic enough to begin with. The gap between graphical and text-based applications would be narrower, as well as the gap between "pre-web" and "modern" online culture.

The development of social media was purely culture-driven: Facebook would have been technically possible already in the 1980s -- feeds based on friend lists don't require more per-user computation than, say, IRC channels. What was needed was cultural development: several "generations" of online services were required before all the relevant ideas came up. In general, most online services I can think of could have taken place in some form or another, about the same time as they appeared in the real world.

The obvious exceptions would be those services that require a prohibitive amount of server-side storage. An equivalent of Google Street View would perhaps just show rough shapes of the buildings instead of actual photographs. YouTube would focus on low-bitrate animations (something like Flash) rather than on full videos, as the default storage space available per user would be quite limited. Client-side video/audio playback wouldn't necessarily be an issue, since MPEG decompression hardware was already available in some consumer devices in the early 1990s (Amiga CD32) and would have therefore been feasible in the Slow-Moore year 2004. Users would just be more sensitive about disk space and would therefore avoid video formats for content that doesn't require actual video.

All the familiar video games would be there, as the resource-hogging aspects of games can generally be scaled down without losing the game itself. It could even be argued that there would be far more "AAA" titles available, assuming that the average budget per game would be lower due to lower fidelity requirements.

Domestic broadband connections would be there, but they would be more often implemented via per-apartment ethernet sockets than via per-apartment broadband modems. The amount of DSP logic required by some protocols (*DSL) would make per-apartment boxes rather expensive compared to the installation of some additional physical wires. In rural areas, traditional telephone modems would still be rather common.

Mobile phones would be very popular. Their computational specs would be rather low, but most of them would still be able to access Internet services and run downloadable third-party applications. Neither of these requires a lot of power -- in fact, every microprocessor is designed to run custom code to begin with. Very few phones would have built-in cameras, however -- the development of cheap and tiny digital camera cells has a lot to do with Moore's law. Also, global digital divide would be greater -- there wouldn't be extremely cheap handsets available in poor countries.

It must be emphasized here that even though IC feature sizes would be at the "1996 level", we wouldn't be building devices from the familiar 1996 components. The designs would be far more advanced and logic-efficient. Hardware milestones would have been more about "reinventing the wheel" than about accumulating as much intellectual property as possible on a single chip. RISC and Transputer architectures would have displaced X86-like CISCs a long time ago and perhaps even given way to ingenious inventions we can't even imagine.

Affordable 3D printers would be just around the corner, just like in the real world. Their developmental bottlenecks have more to do with the material printing process itself than anything Moorean. Similarly, the setbacks in the progress of virtual reality helmets have more to do with optics and head-tracking sensors than semiconductors.

2. People would be more conscious about the use of computing resources.

As mentioned before, digital storage would be far less abundant than in the real world. Online services would still have tight per-user disk quotas and many users would be willing to actually pay for more space. Even laypeople would have a rather good grasp about kilobytes and megabytes and would often put effort in choosing efficient storage formats. All computer users would need to regularly choose what is worth keeping and what isn't. Online privacy would generally be better, as it would be prohibitively expensive for service providers to neurotically keep the complete track record of every user.

As global Internet backbones would have considerably lower capacities than local and mid-range networks, users would actually care about where each server is geographically located. Decentralized systems such as IRC and Usenet would therefore never have given way to centralized services. Search engines would be technically more similar to YaCy than to Google, social media more similar to Diaspora than to Facebook. Even the equivalent of Wikipedia would be a network of thousands of servers -- a centralized site would have ended up being killed by deletionists. Big businesses would be embracing this "peer-to-peer" world instead of expanding their own server farms.

In general, Internet culture would be more decentralized, ephemeral and realtime than in the real world. Live broadcasts would be more common than vlogs or podcasts. Much less data would be permanently stored, so people would have relatively small digital footprints. Big companies would have far less power over users.

Attitudes towards software development would be quite different, especially in regards to efficiency and optimization. In the real world, wasteful use of computational resources is systematically overlooked because "no one will notice the problem in the future anyway". As a result, we have incredibly powerful computers whose software still suffers from mainframe-era problems such as ridiculously high UI latencies. In a Slow-Moore world, such problems would have been solved a long time ago: after all, all you need is good user-level control over how the operating system prioritizes different pieces of code and data, and some will to use it.
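As a very rough sketch of what such user-level control can already look like -- this only covers CPU scheduling on a POSIX system, and the workload is a stand-in -- a background job can simply tell the scheduler to yield to interactive processes:

    # Minimal sketch of user-level prioritization: a background batch job
    # lowers its own scheduling priority on a Unix-like system so that
    # interactive programs stay responsive. (os.nice is POSIX-only.)

    import os

    def run_batch_job():
        os.nice(19)               # politely yield CPU to interactive processes
        total = 0
        for i in range(10_000_000):
            total += i * i        # stand-in for heavy background work
        return total

    if __name__ == "__main__":
        print(run_batch_job())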

Another problem in real-world software development is the accumulation of abstraction layers. Abstraction is often useful during development, as it speeds up the process and simplifies maintenance, but most of the resulting dependencies are a complete waste of resources in the final product. A lot of this waste could be eliminated automatically by the use of advanced static analysis and other methods. From the vast contrast between carefully size-optimized hobbyist hacks and bloated mainstream software, we might guess that some mind-boggling optimization ratios could be reached. However, the use and development of such tools has been seriously lagging behind because of the attitude problems caused by Moore's law.

In a Slow-Moore world, the use of computing resources would be extremely efficient compared to current standards. This wouldn't mean that hand-coded assembly would be particularly common, however. Instead, we would have something like "hack libraries": huge collections of efficient solutions for various problems, from low-level to high-level, from specific to generic. All tamed, tested and proven in their respective parameter ranges. Software development tools would have intelligent pattern-matchers that would find efficient hacks from these libraries, bolt them together in optimal arrangements and even optimize the bolts away. Hobbyists and professionals alike would be competing in finding ever smarter hacks and algorithms to include in the "wisdombase", thus making all software incrementally more resource-efficient.
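A toy sketch of the "hack library" idea might look like the following; every entry and name is invented for illustration, and no existing tool is implied:

    # Toy sketch of a "hack library": a registry of tested, efficient
    # replacements for common computations, each tagged with the parameter
    # range in which it has been proven equivalent to the naive version.
    # A build tool could pattern-match naive code and substitute the hack.

    def naive_popcount(x):
        return bin(x).count("1")

    def kernighan_popcount(x):
        # classic bit trick: clear the lowest set bit on each iteration
        count = 0
        while x:
            x &= x - 1
            count += 1
        return count

    HACK_LIBRARY = {
        # problem name       (replacement,        proven parameter range)
        "popcount":          (kernighan_popcount, "non-negative integers"),
        "is_power_of_two":   (lambda x: x > 0 and (x & (x - 1)) == 0, "integers"),
    }

    def substitute(problem, naive_implementation):
        """Return the proven hack if the library knows one, else the naive code."""
        hack, _valid_range = HACK_LIBRARY.get(problem, (naive_implementation, "any"))
        return hack

    popcount = substitute("popcount", naive_popcount)
    assert all(popcount(n) == naive_popcount(n) for n in range(1024))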

3. There would still be a gap between digital and "real" content.

Regardless of how efficiently hardware resources are used, unbreakable limits always exist. In a Slow-Moore world, for instance, film photography would still be superior in quality to digital photography. Also, since digital culture would be far more resource-conscious, large resolutions wouldn't even be desirable in purely digital contexts.

Spreading "memes" as bitmap images is a central piece of today's Internet culture. Even snippets of on-line discussions get spread as bitmapped screenshots. Wasteful, yes, but compatible and therefore tolerable. The Slow-Moore Internet would probably be much more compatible with low-bit formats such as plaintext or vector and character graphics.

Since the beginning of digital culture, there has been a desire to import content from "meatspace" into the digital world. At first, people did it in laborious ways: books were typed into text files, paintings and photographs were repainted with graphics editors, songs were covered with tracker programs. Later, automatic methods appeared: pictures could be scanned, songs could be recorded and compressed into MP3-like formats. However, it took some time before straight automatic imports could compete against skillful manual effort. In low resolutions, skillful pixel-pushing still makes a difference. Synthesized songs take a fraction of the space of an equivalent MP3 recording. Eventually, the difference diminished, and no one cared about it any longer.

In a Slow-Moore world, the timeline of digital media would have been vastly different. A-priori-digital content would still have vast advantages over imported media. Artists looking for worldwide appreciation via the Internet would often choose to take the effort to learn born-digital methods instead of just digitizing their analog works. As a result, many traditional disciplines of computer art would have grown enormous. Demoscene and low-bit techniques such as procedural content generation and tracker-like synthesized music would be the mainstream norm in the Internet culture instead of anything "underground".

Small steps towards photorealism and higher fidelity would still be able to impress large audiences, as they would still notice the difference. However, in a resource-conscious online culture, there would also probably be a strong countercultural movement against "high-bit" -- a movement seeking to embrace the established "Internet esthetics" instead of letting it be taken over and marginalized by imports.

Record and film companies would definitely be suing people for importing, covering and spreading their copyrighted material. However, they would still be able to sell it in physical formats because of their superior quality. There would also be a class of snobs who hate all "computer art" and all the related esthetics while preferring "real, physical formats".

4. Conclusion

A Slow-Moore world would be somewhat "backwards" in some respects but far more sensible or even more advanced in others. As a demoscener with an ever-growing conflict with today's industry-standard attitudes, I would probably prefer to live with a more moderate level of Moorean inflation. However, a Netflix fan who likes high-quality digital photography and doesn't mind being under surveillance would probably choose otherwise.

The point of my thought experiment was to justify my view that the idea of a linear tech tree strongly tied to Moore's law is a banal oversimplification. There are many other dimensions that need to be taken into account as well.

The alternative timeline may also be used as inspiration for real-world projects. I would definitely like to see whether an aggressively optimizing code generation tool based on "hack libraries" could be feasible. I would also like to see the advent of a mainstream operating system that doesn't suck.

Nevertheless: Down with Moore's law fetishism! It's time for a more mature technological vision!

Saturday, 5 January 2013

I founded a new "oldschool" computer magazine.

Maybe it's a sensible time to tell a bit about what I've been up to for the past few months.

In September 2012, I founded Skrolli, a new Finnish computer magazine. This turn in my life surprised even myself.

It started from an image that went viral. Produced by my friend CCR with a lot of ideas from me, it was a faux magazine cover speculating on what the longest-lived Finnish home computing magazine, MikroBitti, would be like today if it had never renewed itself after the eighties. The magazine happens to be somewhat iconic to those Finns who became immersed in computing before the turn of the millennium, so the image reached a relevant audience quite efficiently.

The faux cover was meant to be a joke, but the abundance of comments like "I would definitely subscribe to this kind of magazine" made me seriously consider the possibility of actually creating something like it. I put up a simple web page stating the idea of a new "countercultural" computer magazine that is somewhat similar to what MikroBitti used to be like. In just a few days, over a hundred people showed up on the dedicated IRC channel, and here we are.

Bringing the concept of an oldschool microcomputer magazine into the present era needs some thoughtful reflection. The world has changed a lot; computer hobbyists no longer exist as a unified group, for example. Everyone uses a computer for leisure, and it is sometimes difficult to draw a line between those who are interested in the applications and those who are genuinely interested in the technology. Different activities also have their own subcultures with their own communication channels, and it is often hard to relate to someone whose subculture has a very different basis.

Skrolli defines computer culture as something where the computational aspects are irreducible. It is possible to create visual art or music completely without digital technology, for example, but once the computer becomes the very material (as in the case of pixel art or chip music), the creative activity becomes relevant to our magazine. Everything where programming or other direct access to the computational mechanisms is involved is also relevant, of course.

I also chose to target the magazine to my own language group. In a nation of six million, the various subcultures are closer to one another, so it is easier to build a common project that spans the whole scale. The continuing existence of large computer hobbyist events in this country might also simplify the task. If the magazine had been started in English or even German, there would have been a much greater risk of appealing only to a few specialized niches.

In order to keep myself motivated, I have been considering the possibility that Skrolli will actually start a new movement: something that brings the computational aspects of computer enthusiasm back into daylight and helps the younger generation find a true, non-compromising relationship with digital technology. Once the movement starts growing on its own, without being tied to a single project, language barriers will no longer exist for it.

I will be busy with this stuff for at least a couple of months, until we get the first few issues printed (yes, it will be primarily a paper magazine, as a statement against short-lived journalism). After that, it is somewhat likely that I will finish the projects I temporarily abandoned: there will probably be a JIT-enabled version of IBNIZ, and the IBNIZ democoding contest I promised will be arranged. Stay tuned!