felix stalder on open source intelligence at first monday and Sarah Lai
Stirland at the Village Voice on 'other people's property'
-----------------------
http://firstmonday.org/issues/issue7_6/stalder/index.html

The Open Source movement has established
over the last decade a new collaborative approach, uniquely adapted to
the Internet, to developing high-quality informational products. Initially,
its exclusive application was the development of software (GNU/Linux and
Apache are among the most prominent projects), but increasingly we can
observe this collaborative approach being applied to areas beyond the coding
of software. One such area is the collaborative gathering and analysis
of information, a practice we term "Open Source Intelligence". In this
article, we use three case studies - the nettime mailing list, the Wikipedia
project and the NoLogo Web site - to show some of the breadth of contexts
and analyze the variety of socio-technical approaches that make up this
emerging phenomenon.

Contents
Open Source Collaborative Principles
A Few Examples of Open Source Intelligence
The Future of OS-INT

In the world of secret services, Open Source Intelligence (OS-INT) means
useful information gleaned from public sources, such as scientific articles,
newspapers, phone books and price lists. We use the term differently. In
the following, OS-INT
means the application of collaborative principles developed by the Open
Source Software movement [1] to the gathering and analysis of information.
These principles include: peer review, reputation- rather than sanctions-based
authority, the free sharing of products, and flexible levels of involvement
and responsibility. As with much on the Internet in general, including the
Open Source Software movement, in the case of OS-INT practice preceded theory.
Many of the Internet's core technologies were created to facilitate
free and easy information sharing among peers. This always included two-way
and multicast communication so that information could not only be distributed
efficiently, but also evaluated collaboratively. E-mail lists - the simplest
of all OS-INT platforms - have been around since the mid-1970s [2].
In the 1980s, bulletin boards, FidoNet and Usenet provided user-driven
OS-INT platforms with more sophisticated and specialized functionality.
In the 1990s, many of these platforms were overshadowed by the emergence
of the World Wide Web. Tim Berners-Lee's foundational work on Web standards
was guided by a vision of peer collaboration among scientists distributed
across the globe [3]. While OS-INT's precedents reach back through the
history of the Internet - and if one were to include peer-reviewed academic
publishing, much beyond that - a series of recent events warrant that it
be considered a distinct phenomenon that is slowly finding its own identity,
maturing from a practice "in itself" to one "for itself." The culture of
the Internet as a whole has been changing. The spirit of free sharing that
characterized the early days is increasingly being challenged by commodity-oriented
control structures which have traditionally dominated the content industries.
At this point, instead of being the norm, free sharing of information
is becoming the exception, in part because the regulatory landscape is
changing. The extension of copyrights and increasingly harsh prosecution
of violations are attempts to criminalize early Net culture in order to
shore up the commodity model, which is encountering serious difficulties
in the digital environment [4]. In other areas, years of experience with
the rise and fall of "proto-OS-INT" forums has accumulated to become a
kind of connective social-learning process. Uncounted e-mail lists went
through boom and bust cycles, large numbers of newsgroups flourished and
then fell apart due to pressures from anti-social behavior. Spam became
a problem. Endless discussions raged about censorship imposed by forum
moderators; controversial debates erupted about who owns a forum (the users
or the providers?); and difficulties were encountered when attempting
to reach any binding consensus in fluctuating, loosely integrated groups.
The condensed outcome of these experiences is a realization that a sustainable,
open and collaborative practice is difficult to achieve and that new specialized
approaches must be developed in order to sustain the fine balance between
openness and a healthy signal/noise ratio. In other words, self-organization
needs some help. The emerging field of OS-INT is made up of numerous,
independent projects. Each of them - the Nettime e-mail list, Wikipedia
and the NoLogo.org Web site, all discussed below - has a distinct history
that has led it to develop different technical and social strategies in
order to realize some or all of the open source collaborative principles.

Open Source Collaborative Principles

One of the early precedents
of open source intelligence is the process of academic peer review. As
academia established a long time ago, in the absence of fixed and absolute
authorities, knowledge has to be established through the tentative process
of consensus building. At the core of this process is peer review, the
practice of peers evaluating each other's work, rather than relying on
external judges. The specifics of the reviewing process are variable, depending
on the discipline, but the basic principle is universal. Consensus cannot
be imposed, it has to be reached. Dissenting voices cannot be silenced,
except through the arduous process of social stigmatization. Of course,
not all peers are really equal, not all voices carry the same weight. The
opinions of those people to whom high reputation has been assigned by their
peers carry more weight. Since reputation must be accumulated over time,
these authoritative voices tend to come from established members of the
group. This gives the practice of peer review an inherently conservative
tendency, particularly when access to the peer group is strictly policed,
as it is in academia, where diplomas and appointments are necessary to
enter the elite circle. The point is that the authority held by some members
of the group - which can, at times, distort the consensus-building process
- is attributed to them by the group, therefore it cannot be maintained
(easily) against the will of the other group members. If we follow Max
Weber's definition that power is the ability to "impose one's will upon
the behavior of other persons," [5] this significantly limits the degree
to which established members can wield power. Eric Raymond had the same
limitations in mind when he noted that open source projects are often run
by "benevolent dictators" [6]. They are not benevolent because the people
are somehow better, but because their leadership is based almost exclusively
on their ability to convince others to follow. Thus the means of coercion
are very limited. Hence, a dictator who is no longer benevolent, i.e. who
alienates his or her followers, loses the ability to dictate. The ability
to coerce is limited, not only because authority is reputation-based, but
also because the products that are built through a collaborative process
are available to all members of the group. Resources do not accumulate
with the elite. Therefore, abandoning the leader and developing the project
in a different direction - known as "forking" in the Open Source Software
movement - is relatively easy and always a threat to the established players.
The free sharing of the products produced by the collaboration among all
collaborators - both in their intermediary and final forms - ensures that
there are no "monopolies of knowledge" that would increase the possibility
of coercion. The free sharing of information has nothing to do with altruism
or a specific anti-authoritarian social vision. It is motivated by the
fact that in a complex collaborative process, it is effectively impossible
to differentiate between the "raw material" that goes into a creative process
and the "product" that comes out. Even the greatest innovators stand on
the shoulders of giants. All new creations are built on previous creations
and provide inspiration for future ones. The ability to freely use and
refine those previous creations increases the possibilities for future
creativity. Lawrence Lessig calls this an "innovation commons," and cites
its existence as one of the major reasons why the Internet as a whole developed
so rapidly and innovatively [7]. It is also important to note that an often
overlooked characteristic of open source collaboration is the flexible
degree of involvement in and responsibility for the process that can be
accommodated. The hurdle to participating in a project is extremely low.
Valuable contributions can be as small as a single, one-time effort - a
bug report, a penetrating comment in a discussion. Equally important, though,
is the fact that contributions are not limited to just that. Many projects
also have dedicated, full-time, often paid contributors who maintain core
aspects of the system - such as maintainers of the kernel, or editors of
a slash site. Between these two extremes - one-time contribution and full-time
dedication - all degrees of involvement are possible and useful. It is
also easy to slide up or down the scale of commitment. Consequently, dedicated
people assume responsibility when they invest time in the project, and
lose it when they cease to be fully immersed. Hierarchies are fluid and
merit-based - however, and whatever, "merit" means to the peers. This also makes
it difficult for established members to continue to hold onto their positions
when they stop making valuable contributions. In volunteer organizations,
this is often a major problem, as early contributors sometimes try to base
their influence on old contributions, rather than letting the organizations
change and develop. None of these principles were "invented" by the Open
Source Software movement. However, they were updated to work on the Internet
and fused into a coherent whole in which each principle reinforces the
other in a positive manner. The conservative tendencies of peer review
are counter-balanced with relatively open access to the peer group: a major
difference from academia, for instance. Most importantly, the practice
of Open Source has proved that these principles are a sound basis for the
development of high-end content that can compete with the products produced
by commodity-oriented control structures [8].

A Few Examples of Open Source Intelligence

< nettime >

Nettime is an e-mail list founded in the summer of 1995 by a group of
cultural producers and media activists during a meeting at the Venice
Biennale. As its homepage states, the list focuses on "networked
cultures, politics, and tactics" [9]. Its actual content is almost entirely
driven by members' submissions. It is a good example of true many-to-many
communication. Nettime calls its own practice "collaborative text filtering."
The filter is the list itself - or to be more precise, the cognitive capacities
of the people on the list. The list consists of peers with equal ability
- though not necessarily interest - to read and write. The practice of
peer review takes place on the list and in real time. The list serves as
an early warning system for the community, a discussion board for forwarded
texts as well as a sizeable amount of original writing, and, equally importantly,
an alternative media channel. This last function became most prominent
during the war against Yugoslavia, when many of its members living in the region
published their experiences of being on the receiving end of not-so-smart,
not-so-precise bombs. By March 2002, the number of subscribers had grown
to 2,500. The number of people who read nettime posts, however, is higher
than the number of subscribers to the list. Nettime maintains a public
Web-based archive that is viewed extensively, and some of the subscriber
addresses are lists themselves. Also, as a high-reputation list, many of
the posts get forwarded by individual subscribers to more specialized lists
(another kind of collaborative text filtering), in addition to being published
in print and other electronic media. The majority of subscribers come from
Western Europe and North America, but the number of members from other
regions is quite sizeable [10]. Over the years, autonomous lists have been
spun off in other languages: Dutch, Romanian, Spanish/Portuguese, French
and Mandarin. A Japanese list is currently in preparation. Despite its
growth and diversity, nettime has retained a high degree of cultural coherence
and developed an original strand of technology-savvy, leftist media critique,
stressing the importance of the cultural and social aspects of technology, as
well as the importance of art, experimentation and hands-on involvement.
This flexible coherence has been strengthened through a series of real-life
projects, such as paper publications including a full-scale anthology [11],
and a string of conferences and "nettime-meetings" in Europe during the
1990s. Since its inception, the list has been running on majordomo, a then
popular open source e-mail list package, and assorted hypermail and mhonarc
based Web archives. Technically, the list has undergone little development.
Initially, for almost three years, the list was open and unmoderated, reflecting
the close-knit relationships of its small circle of subscribers and the
still "clubby" atmosphere of netculture. However, after spam and flame
wars became rampant, and the deteriorating signal/noise ratio began to
threaten the list's viability, moderation was introduced. In majordomo,
moderation means that all posts go into a queue and the moderators - called
"list-owners," an unfortunate terminology - decide which posts get put
though to the list, and which are deleted. This technological set-up makes
the moderation process opaque and centralized. The many list members cannot
see which posts have not been approved by the few moderators. Understandably,
in the case of nettime, this has led to a great deal of discussion about
censorship and "power grabbing" moderators. The discussion was particularly
acrimonious in the case of traffic-heavy ASCII-art and spam-art that can
either be seen as creative experimentation with the medium, or as destructive
flooding of a discursive space. Deleting commercial spam, however, was
universally favored. In order to make the process of moderation more transparent,
an additional list was introduced in February 2000, nettime-bold. This
channel has been carrying all posts that go into the queue prior to moderators'
evaluation. Because this list is also archived on the Web, members can
view for themselves the difference between what was sent to the list and
what was approved by the moderators. In addition to increasing the list's
transparency, having access to the entire feed of posts created the option
for members to implement parallel but alternative moderation criteria.
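
As a rough illustration of the moderation flow just described - every submission
sits in a queue and is simultaneously copied to an unmoderated parallel channel,
while moderators can only approve or delete - the arrangement can be sketched
in a few lines of Python. The class and method names here are invented for the
sketch and are not taken from majordomo itself.

    # Sketch of a queue-plus-parallel-channel moderated list (illustrative only).
    class ModeratedList:
        def __init__(self):
            self.queue = []         # posts awaiting a moderator's decision
            self.main_archive = []  # what subscribers of the moderated list receive
            self.bold_archive = []  # the full, unfiltered feed (the "-bold" channel)

        def submit(self, author, text):
            post = {"author": author, "text": text}
            self.queue.append(post)
            self.bold_archive.append(post)  # transparency: every post is visible here

        def moderate(self, post, approve):
            # the only two options the software offers: approval or deletion
            self.queue.remove(post)
            if approve:
                self.main_archive.append(post)

    lst = ModeratedList()
    lst.submit("alice", "report from the region")
    lst.submit("bob", "ascii-art flood")
    lst.moderate(lst.queue[0], approve=True)   # forwarded to the subscribers
    lst.moderate(lst.queue[0], approve=False)  # silently dropped from the main list
    print(len(lst.main_archive), len(lst.bold_archive))  # 1 2 - the difference is public
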
In practice, however, this has not yet occurred. Nevertheless, giving members
this option has transformed the status of the moderators from being the
exclusive decision makers to "trusted filters." It has also provided the
possibility for forking (i.e. the list splitting into two differently moderated
forums). Nettime is entirely run by volunteers. Time and resources are
donated. The products of nettime are freely available to members and non-members
alike. Even the paper publications are available in their entirety in the
nettime archives [12]. Reflecting its history and also the diversity of
its contributors and submissions, nettime has maintained the rule that
"you own your own words." Authors decide how to handle redistribution of
their own texts, though to be frank, it is hard to have control over a
text's after-life once it has been distributed to 2,500 addresses and archived
on the Web. Despite its many advantages - ease of use, low technical requirements
for participating, direct delivery of the messages into members' inboxes
- the format of the e-mail list is clearly limited when it comes to collaborative
knowledge creation. Moderation is essential once a list reaches a certain
diversity and recognition, but the options for how to effect this moderation
are highly constrained. Nettime's solution - establishing an additional
unmoderated channel - has not essentially changed the fact that there is
a very strict hierarchy between moderators and subscribers. While involvement
is flexible (ranging from lurkers to frequent contributors) the responsibility
is inflexibly restricted to the two fixed social roles enabled by the software
(subscriber and moderator). The additional channel has also not changed
the binary moderation options: approval or deletion. The social capacities
built into the e-mail list software remain relatively primitive, and so
are the options for OS-INT projects using this platform.

< wikipedia.com >

Wikipedia is a spin-off of Nupedia. Nupedia - the name is a combination
of GNU and encyclopedia - is a project to create an authoritative encyclopedia
inspired, and morally supported, by Richard Stallman's GNU project [13].
However, apart from being published under an open license, Nupedia's structure
is similar to the traditional editorial process. Experts write articles
that are reviewed by a board of expert editors (with some public input
via the "article in progress" section) before being finalized, approved,
and published. Once published, the articles are finished. Given the extensive
process, it's not surprising that the project has been developing at a
glacial pace. Wikipedia was started in early 2001 as an attempt to create
something similar - a free encyclopedia that would ultimately be able to
compete with the Encyclopaedia Britannica - but it was developed via a
very different, much more open process. The two projects are related but
independent - Nupedia links to articles on Wikipedia if it has no entries
for a keyword, and some people contribute to both projects, but most don't.
The project's technological platform is called Wikiweb, named after the
Hawaiian word wikiwiki, which means fast [14]. The original software was
written in 1994 but recently rewritten to better handle the rapidly growing
size and volume of Wikipedia. The Wiki platform incorporates one of Berners-Lee's
original concepts for the Web: to let people not only see the source code,
but also freely edit the content of pages they view. In the footer of most
Wikipages is the option to "Edit this page," which gives the user access
to a simple form that allows them to change the displayed page's content.
The changes become effective immediately, without being reviewed by a board
or even the original author. Each page also has a "history" function that
allows users to review the changes and, if necessary, revert to an older
version of the page. In this system, writing and editing are collective
and cumulative. A reader who sees a mistake or omission in an article can
immediately correct it or add the missing information. Following the open
source peer-review maxim, formulated by Eric Raymond as "given enough eyeballs,
all bugs are shallow," this allows the project to grow not only in number
of articles, but also in terms of the articles' depth, which should improve
over time through the collective input of knowledgeable readers. Since
the review and improvement process is public and ongoing, there is no difference
between beta and release versions of the information (as there is in Nupedia).
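
The edit-history-revert cycle described above can be sketched, purely as an
illustration and without reference to the actual Wiki code base, as a small
Python class in which every edit takes effect immediately and every previous
version remains retrievable.

    # Sketch of a wiki page with immediate edits, full history and revert (illustrative only).
    class WikiPage:
        def __init__(self, title, text=""):
            self.title = title
            self.history = [text]      # version 0 onwards; nothing is ever discarded

        @property
        def current(self):
            return self.history[-1]

        def edit(self, new_text):
            # "Edit this page": the change takes effect at once, with no review board
            self.history.append(new_text)

        def revert(self, version):
            # restoring an older version (e.g. after vandalism) simply becomes
            # the newest entry in the history, so intervening edits remain visible
            self.edit(self.history[version])

    page = WikiPage("Open Source Intelligence", "Stub: please expand.")
    page.edit("OS-INT applies open source collaboration to gathering information.")
    page.edit("GARBAGE")   # vandalism
    page.revert(1)         # restore the previous good version
    print(page.current)
    print(len(page.history))  # four versions, all still inspectable
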
Texts continuously change. Peer-review becomes peer-editing, resulting
in what Larry Sanger, one of the original project leaders, hailed as the
"most promiscuous form of publishing." At least as far as its growth is
concerned, the project has been very successful. It passed 1,000 pages
around February 12, 2001, and 10,000 articles around September 7, 2001.
In its first year of existence, over 20,000 encyclopedia entries were created
- that's a rate of over 1,500 articles per month. By the end of March 2002,
the number of articles had grown to over 27,000. The quality of the articles
is a different matter and difficult to judge in a general manner. Casual
searching brings up some articles that are in very good shape and many
that aren't. Of course, this is not surprising given the fact
that the project is still very young. Many of the articles function more
as invitations for input than as useful reference sources. For the moment,
many texts have an "undergraduate" feel to them, which may be appropriate,
since the project just finished its "first year." However, it remains to
be seen if the project will ever graduate. Both Nupedia and Wikipedia have
been supported by Jimbo Wales, CEO of the San Diego-based search engine
company Bomis, who has donated server space and bandwidth to the project.
The code-base was rewritten by a student at the University of Cologne,
Germany, and for a bit more than one year, Larry Sanger held a full-time
position (via Bomis) as editor-in-chief of Nupedia and chief organizer
at Wikipedia. In January 2002, funding ran out and Larry resigned. He now
contributes as a volunteer. There are currently close to 1,200 registered
users, but since it's possible to contribute anonymously, and quite a few
people do, the actual number of contributors is most likely higher. Wikipedia
has not suffered from the resignation of its only paid contributor. It
seems that it has reached, at least for the moment, the critical mass necessary
to remain vibrant. Since anyone can read and write, the paid editor did
not have any special status. His contributions were primarily cognitive,
because he had more time than anyone else did to edit articles and write
initial editing rules and FAQ files. His influence was entirely reputation-based.
He could, and did, motivate people, but he could not force anyone to do
anything against their will. The products of this encyclopedia are freely
available to anyone. The texts are published under the GNU Free Documentation
License [15]. This states that the texts can be copied and modified for
any purpose, as long as the original source is credited and the resulting
text is published under the same license. Not only are the individual texts
available; the entire project - including its platform - can be downloaded
as a single file for mirroring, viewing offline, or any other use. Effectively,
not even the system administrator can control the project. The scale of
people's involvement in the project is highly flexible, ranging from the
simple reader who corrects a minor mistake, to the author who maintains
a lengthy entry, to the editor who continuously improves other people's
entries. These roles depend entirely on each contributor's commitment,
and are not pre-configured in the software. Everyone has the same editing
capabilities. So far, the project has suffered little from the kind of
vandalism that one might expect to occur given its open editing capabilities.
There are several reasons for this. On the one hand, authors and contributors
who have put effort into creating an entry have a vested interest in maintaining
and improving the resource, and due to the "change history" function, individual
pages can be restored relatively easily. The latest version of the platform
has an added feature that can send out alerts to people who request them
whenever a specific page has been changed. The other reason is that the
project still has a "community" character to it, so there seems to be a
certain shared feeling that it is a valuable resource and needs to be maintained
properly. Finally, in the case of real differences over content, it's often
easier to create a new entry rather than to fight over an existing one.
This is one of the great advantages of having infinite space. So far, self-regulation
works quite well. It remains to be seen how long the current rate of growth
can be sustained, and whether it really translates into an improvement in
the quality of individual encyclopedia entries. So far, the prospects
look good, but there are very few examples of the long-term dynamics of
such open projects. Given the fact that its stated competitor, the Encyclopaedia
Britannica, has been publishing since 1768, long term development is clearly
essential to such a project.

< NoLogo.org >

NoLogo.org is perhaps the
most prominent second-generation slash site. This makes it a good example
of how the OS-INT experience, embodied by a specific code, is now at a
stage where it can be replicated across different contexts with relative
ease. NoLogo.org is based on the current, stable release of Slashcode,
an open source software platform released under the GPL, and developed
for and by the Slashdot community. Slashdot is the most well-known and
obvious example of OS-INT, since it is one of the main news and discussion
sites for the open source movement. Of particular importance for OS-INT
is the collaborative moderation process supported by the code. Users who
contribute good stories or comments on stories are rewarded with "karma,"
which is essentially a point system that enables people to build up their
reputation. Once a user has accumulated a certain number of points, she
can assume more responsibilities, and is even trusted to moderate other
people's comments. Points do have a half-life, however. If a user stops
contributing, their privileges expire. Each comment can be assigned points
by several different moderators, and the final grade (from -1 to +5) is
an average of all the moderators' judgments. A good contribution is one
that receives high grades from multiple moderators. This creates a kind
of double peer-review process. The first is the content of the discussion
itself where people respond to one another, and the second is the unique
ranking of each contribution. This approach to moderation addresses very
elegantly several problems that bedevil e-mail lists. First, the moderation
process is collaborative. No individual moderator can impose his or her
preferences. Second, moderation means ranking, rather than deleting. Even
comments ranked -1 can still be read. Third, users set their preferences
individually, rather than allowing a moderator to set them for everyone.
Some might enjoy the strange worlds of -1 comments, whereas others might
only want to read the select few that garnered +5 rankings. Finally, involvement
is reputation- (i.e. karma-) based and flexible. Since moderation is collaborative,
it's possible to give out moderation privileges automatically. Moderators
have very limited control over the system. As an additional layer of feedback,
moderators who have accumulated even more points through consistently good
work can "meta-moderate," or rank the other moderators. The social potential
The social potential embodied in Slashcode was available when Naomi Klein's January 2000 book
No Logo: Taking Aim at the Brand Bullies became a sudden international
best-seller. In the wake of the anti-globalization protests in Seattle
in November 1999 and after, the book began to sell in the tens and later
hundreds of thousands. She found herself caught in a clash of old and new media
and facing a peculiar problem. A book is a highly hierarchical and centralized
form of communication - there is only a single author and a very large
number of readers. It is centralized because users form a relationship
with the author, while typically remaining isolated from one another. This
imbalance of the broadcast model is usually not a problem, since readers
lack efficient feedback channels. Today, however, many readers have e-mail,
and they began to find Naomi's e-mail address on the Web. She started receiving
e-mails en masse, asking for comments, advice, and information. There was
no way she could take all these e-mails seriously and respond to them properly.
The imbalance between the needs of the audience and the capacities of the
author was just too great, particularly since Naomi had no interest in
styling herself as the leader or guru of the anti-globalization movement.
(Of course that didn't stop the mass media from doing so without her consent.)
As she explains the idea behind NoLogo.org: "Mostly, we wanted a place
where readers and researchers interested in these issues could talk directly
to one another, rather than going through me. We also wanted to challenge
the absurd media perception that I am "the voice of the movement," and
instead provide a small glimpse of the range of campaigns, issues and organizations
that make up this powerful activist network - powerful precisely because
it insistently repels all attempts to force it into a traditional hierarchy"
[16]. The book, which touched a nerve for many people, created a global,
distributed y"communityy" of isolated readers. The book provided a focus,
but nowhere to go except to the author. The Slashcode-based Web site provided
a readily available platform for the readers to become visible to one another
and break through the isolation created by the book. The book and the OS-INT
platform are complementary. The book is a momentary and personal solidification
of a very fluid and heterogeneous movement. The coherent analysis that
the traditional author can produce still has a lot of value. The OS-INT
platform, on the other hand, is a reflection of the dynamic multiplicity
of the movement, a way to give back something to the readers (and others)
and a connective learning process. More than the book, NoLogo.org fuses
action with reflection. Of course, all the problems traditionally associated
with public forums are still there: dissent - at times vitriolic and
destructive - is voiced, but the moderating system allows members of the
group to deal with differences of opinion in ways that do not impede the
vitality of the forum. Slashdot's learning process in dealing with these
issues benefited NoLogo significantly. Within the first year, 3,000 users
registered on the site, which serves some 1,500 individual visitors per day.

The Future of OS-INT

As a distinct
practice, Open Source Intelligence is still quite young and faces a few
challenges. First, there is the issue of scale. Compared to traditional
broadcast media, OS-INT projects are still very small (with the exception
of Slashdot, which has about half a million registered users) [17]. Since
scale and exposure significantly affect the social dynamics, growth might
not come easily for many projects. Second, there is an issue of economics.
Most OS-INT projects are pure volunteer projects. Resources are donated.
Wikipedia, for example, depends on Bomis Inc. for hardware and bandwidth.
NoLogo.org is financed through royalties from book sales. Most OS-INT projects
have not yet produced any revenue to cover some of the inevitable costs.
So far, they have quite successfully relied on donations (from sympathetic
individuals, corporations or foundations), but the prolonged crisis of the
Internet economy does not necessarily make it easier to raise funds, which
becomes more important as the projects grow in size and the infrastructure/bandwidth
needs increase. Compared to traditional production and publishing models,
OS-INT projects take place to a large degree outside the traditional monetary
economy. Contributors, by and large, are not motivated by immediate financial
gain. However, not all resources can be secured without money, so new and
creative models of financing such projects need to be found. Slashdot,
for example, which could rely for a long time on advertisement as a main
revenue source, recently had to increase the size of banners in order to
keep up with costs. However, it also gave users the option of accessing the
site without advertisement - in exchange for a small subscription fee.
It is likely that OS-INT projects, from an economic point of view, will
develop into a hybrid involving direct revenues (e.g. subscription, advertisement),
goodwill donations and volunteer efforts. How these different elements
will relate to one another will change from project to project. There is
a lot of room - and need - for creative experiments. Despite these challenges,
there are good reasons to be optimistic about OS-INT's future. First, the socio-technological
learning process is deepening. The platforms and practices of OS-INT are
becoming better understood, and consequently the hurdles for users as well
as providers are getting lower. On the users' side, the experience of learning
how to deal with participatory, rather than broadcast media is growing.
Their distinct character is being developed, mastered and appreciated.
For providers, the learning experience of OS-INT is embedded in sophisticated,
freely available GPL software. The start-up costs for new projects are
minimal, and possibilities for adapting the platform to the idiosyncratic
needs of each project are maximized. The resulting diversity, in turn,
enriches the connective learning process. Second, as the mass media converges
into an ever smaller number of (cross-industrial) conglomerates, which
relentlessly promote and control their multitude of media products, the
need for alternative information channels rises, at least among people
who invest time and cognitive energy into being critically informed. Given
the economics of advertisement-driven mass media, it is clear that the
possibilities of an "alternative newspaper" are rather limited. OS-INT platforms,
by distributing labor throughout the community, offer the possibility of
reaching a wider audience without being subject to the same economic pressures
that broadcast and print media face to deliver those audiences to advertisers,
particularly considering the fact that paid subscriptions allow access
to advertisement-free content. The more homogenous the mainstream media
becomes, the more room opens up for alternatives. And if these alternatives
are to be viable, then they must not be limited to alternative content,
but must also explore the structure of their production. This is the promise
and potential of OS-INT. The range of technologies is as wide as the range
of communities, and a close relationship exists between the two. Technologies
open and close possibilities in the same sense that social communities
do. As Lawrence Lessig pointed out, what code is to the online world, architecture
is to the physical world [18]. The way we live and the structures in which
we live are deeply related. The culture of technology increasingly becomes
the culture of our society.

About the Authors

The authors are associated with some of the projects analyzed in this
article. Felix Stalder is currently one of the moderators of the nettime
mailing list (nettime-l). Jesse Hirsh is closely involved with NoLogo.org.

Acknowledgments

An earlier version of this paper was presented at the conference "Critical
Upgrade: Reality Check for Cyber Utopias" (Zagreb, 4-5 May 2002).

Notes

1. We use the term
Open Source for its deliberate openness. Contrary to the more narrow term
Free Software, Open Source seems better suited to label a general collaborative
approach not limited to code. We acknowledge the historical and ideological
differences between the two concepts, but we believe that they are of limited
relevance in the context of the present argument.
2. http://www.zakon.org/robert/internet/timeline/#1970s, accessed 25 March 2002.
3. Tim Berners-Lee with Mark Fischetti, 1999. Weaving the Web: The Original Design and the Ultimate Destiny of the World Wide Web by its Inventor. New York: HarperCollins.
4. Lawrence Lessig, 2001. The Future of Ideas: The Fate of the Commons in a Connected World. New York: Random House.
5. Max Weber, 1954. Max Weber on Law in Economy and Society. Translated by Talcott Parsons. Cambridge, Mass.: Harvard University Press.
6. Eric Raymond, 2000. "Homesteading the Noosphere," at http://www.tuxedo.org/~esr/writings/cathedral-bazaar/homesteading/x349.html.
7. Lawrence Lessig (2001).
8. Often, but not always, these principles are supported by licenses setting the legal parameters for what can, or cannot, be done with the informational products governed by them. For an overview of the different licenses, see the Open Source Initiative's list of more than 30 "approved licenses" at http://www.opensource.org/licenses.
9. http://www.nettime.org.
10. http://amsterdam.nettime.org/Lists-Archives/nettime-l-0203/msg00080.html.
11. J. Bosma, P. Van Mourik Broekman, T. Byfield, M. Fuller, G. Lovink, D. McCarty, P. Schultz, F. Stalder, M. Wark, and F. Wilding (editors), 1999. Readme! Ascii Culture and the Revenge of Knowledge. New York: Autonomedia.
12. http://www.nettime.org/pub.html.
13. http://www.gnu.org/encyclopedia/free-encyclopedia.html.
14. http://www.wiki.org.
15. http://www.wikipedia.com/wiki/GNU+Free+Documentation+License.
16. http://www.nologo.org/letter.shtml.
17. OS-INT projects take place on the Internet, hence they still cannot have the broad reach of traditional broadcast media.
18. Lawrence Lessig, 1999. Code and Other Laws of Cyberspace. New York: Basic Books.

Editorial history

Paper received 15 May 2002; revised version received 20 May 2002; accepted 20 May 2002.

Copyright ©2002, First Monday

Open Source Intelligence by Felix Stalder and Jesse Hirsh
First Monday, volume 7, number 6 (June 2002),
URL: http://firstmonday.org/issues/issue7_6/stalder/index.html

---------------------------

Academics Square Off Against Hollywood on Internet Content
Other People's Property
by Sarah Lai Stirland

A bevy of legal minds are facing off against Hollywood over
the corporate control of Internet content. After hearings this fall, a
Supreme Court decision may determine the level of access Americans have
to a wide swath of their cultural heritage. This latest chapter of the
ongoing battle went public in January, when Stanford law professor Larry
Lessig wrote an op-ed in The Washington Post chastising entertainment conglomerates
for inhibiting the growth of high-speed, or broadband, Internet access.
Citing fears of unchecked piracy and increased competition, he argued, the
industry prevents the distribution of digital movies via the Internet, and
as a result the U.S. lags behind many other countries in the rollout of
broadband. The following
month, Jack Valenti, chairman and CEO of the Motion Picture Association
of America, issued a rebuttal, again in the Post. According to Valenti,
Hollywood does want to work with Silicon Valley to enable the secure delivery
of digital movies online, but piracy threatens to undermine the very revenue
streams that make the financing of movies possible. Valenti went so far
as to claim the $35 billion film industry is "under siege" from a small
community of academics. The phrasing seemed especially curious shortly
thereafter, when Senate Commerce Chairman Fritz Hollings introduced the
Consumer Broadband and Digital Television Promotion Act, which would require
computer and consumer-products manufacturers to embed anti-copying technology
into their products. The entire industry may not be under siege, but one
of its wealthiest members—namely Disney—may be feeling a bit nervous. The
week before Valenti's editorial was published, the Supreme Court agreed
to hear Eldred v. Ashcroft, in which Lessig and a Harvard legal team charge
that the Sonny Bono Copyright Term Extension Act of 1998 is unconstitutional.
They allege that the act, which increases the term of copyright ownership
from 75 to 95 years, violates the "limited times" section of the copyright
clause in the Constitution. Since Mickey Mouse made his first public appearance
in 1928's Steamboat Willie, Disney's exclusive rights to its mascot were
set to expire next year, but now won't run out until 2023. Others approaching
their 75th birthday who were granted a stay include Pluto, Goofy, and Donald
Duck. Billions of dollars are at stake for Disney, since all of these characters
play starring roles in the company's theme parks, filmed entertainment,
and merchandising. For Lessig and his colleagues, the question is how to
apply U.S. constitutional law to the Internet; for Valenti and his cohorts,
it's how to make the Internet conform to the rules of every other entertainment
conduit. The academics maintain that neither copyright law nor the Constitution
has ever guaranteed authors and inventors complete, infinite control over
their creations. The industry, meanwhile, frames the debate as a matter
of intellectual property rights. "Property talk limits our imagination—it
is severely limited when influential figures such as Jack Valenti use the
word theft eight or nine times in a given speech, because it is impossible
to argue for theft," says cultural historian Siva Vaidhyanathan, author
of Copyrights and Copywrongs: The Rise of Intellectual Property and How
It Threatens Creativity. In a debate with Lessig at the University of Southern
California Annenberg School for Communication in Los Angeles last November,
Valenti stated, "Copyright is at the core of this country's creativity.
If it diminishes, or is exiled, or is shrunk, everyone who belongs to the
creative guilds, or is trying to get into the movie business, or is in
television, is putting their future to hazard." But what of the past? In
an amicus curiae brief supporting the petitioners in Eldred v. Ashcroft,
Berkeley law professor Mark Lemley quoted some troubling statistics: Only
20 percent of American films made in the 1920s still survive; for the 1910s,
the figure drops to 10 percent. Just 174 books out of 10,027 published
in 1930 remain in print. Digital archives could preserve access to this
material via the Internet, but the Bono act presents a massive stumbling
block to such efforts. Lessig and a group of academics from four other
universities propose the creation of an intellectual-property preserve—an
online environment that preserves cyberspace's culture of innovation. Which
is only possible, Lessig believes, if people can tinker with others' work
without always having to obtain permission first. Part of the effort, named
the Creative Commons, involves intellectual-property licenses that artists,
authors, and software programmers could use to label their work and make
clear under what conditions it may be re-used. Meanwhile, Rick Prelinger,
proprietor of Prelinger Archives in New York and San Francisco, has already
embarked on creating his own version of the national cultural park. His
stock-footage company holds more than 145,000 cans of ephemera from 1903
through the '80s: advertising, educational, industrial, and amateur films.
At his Internet Moving Images Archive (www.archive.org/movies), about a
thousand of these artifacts—all in the public domain—are available free
of charge and for re-use; since the site's debut 18 months ago, nearly
1 million have been downloaded. Brian Balogh, a history professor at the
University of Virginia, has designed a similar site for his course "Viewing
America: The United States From 1945 to the Present." To teach the class,
he uses a Web site—what he calls his "electronic sourcebook"—holding excerpts
from a dozen or so films, including a revealing Disney short commissioned
by the U.S. government during World War II called The Spirit of '43, which
promotes tax-paying as a patriotic duty. "I want my students to actually
experience what Americans experienced at the time," Balogh says. "It's
important because it gives them a more direct experience of history." Balogh
stresses the site's features that make his use of the material lawful:
multiple passwords, the use of brief excerpts only, and its strictly educational
purpose. "It would be nice if a site like this were available to the public—I
get requests all the time to make it available, but I can't take the password
protection down because then I'll be in violation of the current copyright
law," he says. Likewise, many films that Prelinger previously offered on
the Web have been ushered out of the public domain by the Bono act. "We
now have a great deal of post-1964 material that remains unusable except
to look at in-house," says Prelinger. Still, in what may come as a surprise
to Valenti, the project has actually increased Prelinger's business—the
archive effectively works as advertising for his offline enterprise. "We
see this project as an example of a new business model for providing access
to cultural property—the intellectual property preserve," Prelinger says.
"Its concept supports freedom of inquiry and freedom of expression." --------------------
she runs the birch bark book (or was it pottery?) store http://www.findarticles.com/cf_0/
m2342/1_33/58055909/p1/article.jhtml "Where the Maps Stopped": The Aesthetics
of Borders in Louise Erdrich's Love Medicine and Tracks. Author/s: Rita
Ferrari Issue: Spring, 1999 The language of margins and borders marks a
position of paradox: both inside and outside. - Linda Hutcheon (Poetics 66)

In her novels Love Medicine and Tracks, Louise Erdrich engages the
paradox of employing and glorifying the oral tradition and its culturally
cohesive function by inscribing this tradition.(1) The text that simultaneously
asserts and denies the presence of voice makes explicit the paradoxical
presence and absence that is the condition of all language, of all texts
as they compose words to call forth a world. In Erdrich's work this paradox
plays itself out in representing a people, and their culture, who have
been unrepresented or represented in manipulative ways in the service of
a dominant group's ideology. Her work thus questions the politics of representation.
Erdrich's early novels, Love Medicine (1984, 1993) and The Beet Queen (1986),
have received the highest praise for their stylistic beauty and lyricism,
yet they also have been criticized for a lack of psychological depth and
inattention to the historical and political conditions of oppression suffered
by Native American characters. In her essay "The Silko-Erdrich Controversy,"
Susan Perez Castillo argues against such accusations, urging a more sophisticated
hermeneutical approach to Erdrich's texts. She emphasizes the importance
of attending to their silences and, following Brian McHale, to their postmodern
use of"representation itself to subvert representation, problematizing
and pluralizing the real" (292). In her reading of Tracks, Nancy J. Peterson
situates this subversion in terms of Erdrich's renegotiation of historical
discourses: "The new historicity that Tracks inscribes is neither a simple
return to historical realism nor a passive acceptance of postmodern historical
fictionality. Tracks takes up the crucial issue of the referentiality
of historical narrative in a postmodern epoch and creates the possibility
for a new historicity by and for Native Americans to emerge" (991). For
instance, "the evocation of the oral in the written text implicates [a]
counterhistory in the historical narrative [constructed through documents]
that it seeks to displace" (985); "The documentary history of dispossession
that the novel uses and resists functions as an absent presence" (987).
Erdrich innovatively participates in "[w]riting history (as historical
novels and in other forms)," which, Peterson says, "has [...] become one
way for marginalized peoples to counter their invisibility" (983). Indeed,
the play of absence and presence imbues Erdrich's texts in multiple ways.
Perhaps most striking among these is precisely the inscription and thematization
of the invisible and the visible. In her texts, this inscription and thematization
acquire both negative and positive significances; invisibility signifies
cultural oppression but can also signify access to the transcendent when
invisibility inverts and expands into vision. Yet the significance of invisibility
and vision constantly shifts in Erdrich's novels according to the speaker
and the reader who situate themselves inside or outside of Native American
culture. In the fluidity of their meanings - their crossing the boundary
of definition - the concepts of invisibility and vision, along with concepts
of the inside and the outside, reflect the complexity of Erdrich's aesthetic
engagement of the idea of the border. In her novels about Native American
characters confined within and defined by the borders of a reservation
and the boundaries of ethnic definition, Erdrich (who is herself part
Chippewa, part German American) uses the concept of the border as metaphor
and narrative strategy for a newly imagined negotiation of individual and
cultural identity.

---------------------

http://itech.fgcu.edu/&/issues/vol1/issue2/erdrich.htm

Native American Humor: Powerful Medicine in Louise Erdrich's Tracks
by Leslie Gregory

An old adage claims that laughter
is the best medicine to cure human ailments. Although this treatment might
sound somewhat unorthodox, its value as a remedy can be traced back to
ancient times when Hippocrates, in his medical treatise, stressed the importance
of "a gay and cheerful mood on the part of the physician and patient fighting
disease" (Bakhtin 67). Aristotle viewed laughter as man's quintessential
privilege: "Of all living creatures only man is endowed with laughter"
(Bakhtin 68). In the Middle Ages, laughter was an integral part of folk
culture. "Carnival festivities and the comic spectacles and ritual connected
with them had an important place in the life of medieval man" (Bakhtin
5). During the trauma and devastation of German bombing raids on London
during World War II, the stubborn resilience of British humor emerged to
sustain the spirit of the people and the courage of the nation. To laugh,
even in the face of death, is a compelling force in the human condition.
Humor, then, has a profound impact on the way human beings experience life.
In Louise Erdrich's novel Tracks, humor provides powerful medicine as the
Chippewa tribe struggles for their physical, spiritual, and cultural survival
at the beginning of the twentieth century. While the ability to approach
life with a sense of humor is not unique to any one society, it is an intrinsic
quality of Native American life. "There is, and always has been, humor
among Indians . . . " (Lincoln 22). In deference to their history, this
can best be described as survival humor, one which "transcends the void,
questions fatalism, and outlasts suffering" (Lincoln 45). Through their
capacity to draw common strength from shared humor, Native Americans demonstrate
how "kinship interconnects comically . . . . [in] a kind of personal tribalism
that begins with two people, configurates around families, composes itself
in extended kin and clan, and ends up defining a culture" (Lincoln 63).
In Tracks, the power of Native American humor to profoundly affect human
experience is portrayed through the characters of Nanapush and Fleur. In
his role as "Nanabush" the trickster, a central figure in Chippewa (Ojibwa)
storytelling, Nanapush demonstrates the power of Native American humor
in his own life, when he challenges the gods and cheats death by playing
a trick on them: "During the year of the sickness, when I was the last
one left, I saved myself by starting a story . . . . I got well by talking.
Death could not get a word in edgewise, grew discouraged, and traveled
on" (Erdrich 46). The trickster figure is characterized as a man of many
guises, dualistic in nature - good and bad - and often considered quite a lover.
He is a survivor, physically and psychologically. As one who endures, he
transcends the temporal and functions as an affirmation of the self. The
trickster is also "central to the tribe's worldview," with power that extends
beyond himself, guiding his people toward a view of themselves and of possibility
that they might not have seen otherwise (Ghezzi 444). To fulfill his role
as trickster, Nanapush uses humor as powerful medicine not only for himself,
but also for his tribe. Nanapush purposefully directs his own special brand
of humor - raucous bantering - at Margaret, guiding her away from her hardened
widow-view of life toward the possibility of a romantic relationship with
him. He goads her by boasting of his sexual prowess, to which she is less
than receptive. Nanapush describes her as "headlong, bossy, scared of nobody
and full of vinegar" (Erdrich 47), while she calls him an "old man . .
. . [with] two wrinkled berries and a twig." When he replies, "A twig can
grow," Margaret retorts, "But only in the spring" (Erdrich 48). Through
humor, each comes to view the other with new possibility. Out of their
bantering evolves a deeper, more meaningful relationship, one that binds
them together in strength, companionship, and love. Through a more subtle,
gentle humor, Nanapush guides Eli Kashpaw, who is like a son, toward a
successful romantic union with Fleur Pillager, a union that is both an
uninhibited celebration of life between two lovers and a symbol of hope
for the people of their tribe. When Eli pleads for advice on how to woo
Fleur, Nanapush imparts the humorous wisdom of a man who has had three
wives: "I told him what he wanted to know. He asked me the old-time way
to make a woman love him and I went into detail so he should make no disgraceful
error" (Erdrich 45). He also gave him "a few things from the French trunk
my third wife left . . . " to help him in the courting process (Erdrich
45). Nanapush is pleased when he hears nothing more from Eli after he returns
to Fleur, interpreting this "as a sign she [Fleur] liked the fan, the bead
leggings, and maybe the rest of Eli, the part where he was on his own"
(Erdrich 46). A powerful, sensuous relationship develops between Eli and
Fleur that provides solace to themselves and inspiration to their tribe
during a bitter winter, when there was no food and little hope, and the
people of the tribe chopped holes in Lake Matchimanito to fish. They "stood
on the ice for hours, waiting, slapping themselves, with nothing to occupy
them but their hunger and their children's hunger" (Erdrich 130). From
Fleur's cabin across the frozen lake, the people could hear faint calls
"uncontained by the thick walls of the cabin. These cries were full of
pleasure, strange and wonderful to hear, sweet as the taste of last summer's
fruit. Bundled in strips of blanket, coats stuffed with leaves and straw,"
they pushed the scarves away from their ears to hear the sounds of pleasure
that "carried so well through the hollow air, even laughing whispers .
. ." (Erdrich 130). The people listened "until they heard the satisfaction
of silence. Then they turned away and crept back with hope. Faintly warmed,
they leaned down to gather in their icy line." (Erdrich 130). The celebration
of life between two lovers, born from the humorous wisdom of Nanapush in
his advice to Eli, was transferred to the tribe as spiritual nourishment
and the possibility of hope. Nanapush unleashes an unmerciful humor on
Pauline, the tragic, self-tortured figure torn between her Chippewa heritage
and her desire to reject it, in order to guide her away from her path toward
self-destruction. Through his role as trickster, Nanapush tries to force
Pauline toward a new view of herself, one that will end her persistent
practice of self-mortification in rejection of her heritage and return
her to her place within the tribe. "[W]hen Pauline has limited herself
to urinating only twice a day, Nanapush tells a ribald story, fills her
with tea, and tricks her into using the outhouse before she is supposed
to" (Towery 104). Sadly, Nanapush's attempt at survivalist humor, which
can steer "a neurotic from the shoals of self-torment," fails with Pauline,
who chooses instead a path that leads her away from the kinship of the
tribe, as well as from the humor that could heal and save her (Lincoln
166). Through Fleur, Erdrich epitomizes the power of Native American humor
to ridicule fate and to transcend sorrow. When Fleur learns that she has
lost her land to a logging company, she devises a plan that allows her
to "[alchemize] her suffering toward ironic perception and comic possibility"
(Lincoln 166). She will have the last, ironic laugh. If her beloved trees
must fall, she will not let them be felled by white men's hands. During
the months that mark the logging company's march of destruction through
the forest toward her cabin, Fleur uses a stolen axe and a stolen saw to
cut almost, but not completely, through the bases of the last remaining
stand of trees surrounding her cabin. When the loggers finally arrive at
her doorstep, she is ready for them. Fleur has alchemized her suffering
toward an act of defiance that will give her the strength she will need
to transcend the sorrow of her loss. A strong sense of uneasiness and foreboding
drives Nanapush to Fleur's cabin. As he passes through the desecrated remnants
of the woods, he sees that all that remains is "the square mile of towering
oaks, a circle around Fleur's cabin" (Erdrich 220). When he reaches her
cabin, Fleur is standing at the front door, surrounded by wagons and logging
men, ?waiting for the signal, for the word, to take down the last of the
trees.? Nanapush expects to see sorrow and defeat on Fleur?s face, but
?[h]er face was warm with excitement and her look was chilling in its clear
amusement. She said nothing, just glanced into the sky and let her eyes
drop shut,? drawing silent strength from the ironic triumph of her secret
(Erdrich 222). Nanapush realizes what Fleur has done when ?along the edge
of the last high woods, a low breeze moaned out of the stumps? and he hears
the sound of the first tree crashing down beyond his sight (Erdrich 222).
As other trees fall, closer and closer to where the loggers are standing,
Fleur has ?bared her teeth in a wide smile that frightened even those who
did not understand the smiles of Pillagers? (Erdrich 223). A final gust
of wind topples the remaining trees, and they fall away from her cabin
?in a circle, pinning beneath their branches the roaring men, the horses
. . . Twigs formed webs of wood, canopies laced over groans and struggles.
Then the wind settled, curled back into the clouds, moved on? (Erdrich
223). In the quiet shock of the aftermath, Nanapush and Fleur ?were left
standing together in a landscape level to the lake and to the road? (Erdrich
223). Although Nanapush urges Fleur to remain with the tribe, she rejects
his offer. ?[W]ith her face alight,? she buckles herself to a small cart
that holds no possessions, ?only weed-wrapped stones from the lake-bottom,
bundles of roots, a coil of rags, and the umbrella that had shaded her
[dead] baby,? and sets out alone (Erdrich 224). No force is powerful enough
to reconcile the desecration of her land, but through her ironic act of
defiance, Fleur has drawn the strength she will need to survive. In Louise
Erdrich?s Tracks, Native American humor challenges fate, nourishes the
human spirit, and gives strength and hope for survival. ?The powers to
heal and to hurt, to bond and to exorcise, to renew and to purge remain
the contrary powers of Indian humor? (Lincoln 5). For the Chippewa, this
humor provides powerful medicine for the physical, cultural, and spiritual
preservation of their tribe.

Works Cited

Bakhtin, Mikhail. Rabelais and His World. Bloomington: Indiana UP, 1984.
Erdrich, Louise. Tracks. New York: Harper Collins, 1988.
Ghezzi, Ridie Wilson. "Nanabush Stories from the Ojibwe." Coming to Light. Ed. Brian Swann. 1st ed. New York: Random House, 1994.
Lincoln, Kenneth. Indi'n Humor. New York: Oxford UP, 1993.
Sergi, Jennifer. "Storytelling: Tradition and Preservation in Louise Erdrich's Tracks." World Literature Today 66 (Spring 1992): 279-282.
Towery, Margie. "Continuity and Connection: Characters in Louise Erdrich's Fiction." American Indian Culture and Research Journal 16 (1992): 99-115.
---------------------
carolhurst.com/titles/birchbarkhouse.html The Birchbark House is what many
of us have been seeking for many years: a good story through which the
Native American culture during the Westward Expansion of the United States
is realistically and sympathetically portrayed. This band of Ojibwa (old
name: Anishinabe) live on an island in Lake Superior and we are witness
to much of the custom and ritual, successes and tragedies of these people
who lived so closely bound to the earth. The book has been nominated for
the National Book Award for Young People's Literature. It makes an excellent
read-aloud choice for children as young as third grade and should appeal
to youngsters all the way up through seventh, at least. It's the good story
and strong characterization that lift what could have been earnest and
dull into the realm of good literature. If you are introducing children
of any culture to that period of United States history or just looking
for a good book to share, you will want this book to begin to balance the
picture. The author is a member of the Turtle Mountain Band of Ojibwa and has written
several outstanding novels for adults. This is her first children's novel
and she's done a remarkable job. The non-Indian settlers and voyageurs,
called "chimookomanug", are viewed from a variety of perspectives. One
man, Fishtail, is going to the mission school to learn their language so
that he will know what the treaties say. Deydey, Omakayas' father, is half
white and loathes and ridicules the chimookomanug, yet he trades with them
for his living. Old Tallow refuses to have anything to do with them. Fishtail
compares them to greedy children, always wanting more. Omakayas herself
seems removed from their influence. The story itself is divided into seasons:
neebin (summer), dagwaging (fall), biboon (winter), and zeegwun (spring).
We first meet the main character, Omakayas, age seven, as she and her grandmother,
Nokomis, search for birch bark for the birchbark house they will live in
until next fall. Nokomis is wise and her relationship with Omakayas is
strong and loving. Old Tallow, an eccentric old woman, is another vital
character in this story. She has no children although she has had three
husbands. Tall and powerful even in her advanced age, she is guarded by
three fierce dogs and has little patience for most humans. Omakayas, however,
is treated with kindness and it is within Old Tallow that the secret of
the story lies. Comic relief is provided by the antics of Little Pinch
and by the character of Albert LaPautre who comes to Deydey frequently
sure that his latest dream is significant. Although the others are amused,
they take care not to hurt his feelings. One evening at the dance lodge,
a visitor comes with his voyageur crew. He says little and appears ill.
The scourge he brought is smallpox. One by one, everyone in the family
becomes ill. Eventually Omakayas is the only one not stricken and she nurses
the others tirelessly. In spite of her efforts, her baby brother dies.
Many others in the community also lose their lives but it is the death
of her brother that changes Omakayas into a sad, joyless being. Old Tallow
brings Omakayas her only relief and encouragement. The very last pages
of the book contain the secret of Omakayas and bring the whole book full
circle. As you begin sharing reactions to this book, stay within its confines
for a while. Read favorite sections aloud to each other and describe your
reaction. Consider each character, no matter how brief the appearance,
in relationship to the other characters in the story. There's so much here
for discussion and research that it's hard to know where to begin. Encourage
the kids to grab the angle that appeals to them and go with it. Certainly
you'll want to locate the setting and describe the topography of the area.
Tracing the Anishinabe as they were moved farther and farther west might
be the next step. A search for the various Indian treaties can be started
at http://www.inac.gc.ca/treatdoc/ which has a copy of the Robinson Treaty
with the Ojibwa of Lake Superior in 1850. The rituals and beliefs of the
people may intrigue some readers. Comparing them to those of other cultures
may be a thread to follow. Notice the creation story that Nokomis tells
and compare it to creation stories from around the world, particularly
in Virginia Hamilton's book In the Beginning: Creation Stories from Around
the World (Harcourt, 1991 ISBN 0152387404). You'll want to compare this
view of the westward expansion with those in other novels such as: Grasshopper
Summer by Ann Turner (Troll, 1991 ISBN 0816722625.) Call Me Francis Tucket
by Gary Paulsen (Yearling, 1996 ISBN 0440412706.) The Cabin Faced West
by Jean Fritz (Viking, 1987 ISBN 0 140 32256 6) Prairie Songs by Pam Conrad
(HarperTrophy, 1993 ISBN 0 06 440206 1) The Way West by Amelia Knight (Simon
& Schuster, 1993 ISBN 0 67172375 8) The Log Cabin Quilt by Ellen Howard
(Holiday House, 1996 ISBN 0 8234 1247 4). Be sure to add to the Native
American viewpoint by including: The Life and Death of Crazy Horse by Russell
Freedman (Holiday House, 1996 ISBN 0 8234 1219 9) Plains Warrior by Albert
Marrin (Atheneum, 1996 ISBN 0 689 80081 9) Sweetgrass by Jan Hudson (Paper
Star, 1999 ISBN 0 698 11763 8) Morning Girl by Michael Dorris (Hyperion,
1999 ISBN 0 786 81358 X) Guests also by Michael Dorris (Hyperion, 1994
ISBN 0 786 80047 X). Related Areas of Carol Hurst's Children's Literature
Site Native Americans and Children's Literature, Featured Subject. Classroom
discussion, activities, related books and links. carolhurst.com/subjects/nativeamericans.html
=================
What distinguishes Google from other search-engines isn't the number of pages it indexes, nor the frequency of its updates... it's the way it sorts (or 'ranks') the results. In general, you can type in any phrase, and be confident you'll get a handful of topnotch hits within the top ten results. And I expect Google will continue to refine their ranking algorithms, so that fewer and fewer _duds_ slip into those highest ranks. But so far Google's sorting has been a strictly _one-dimensional_ ranking, and they haven't given any sign of exploring alternative approaches.

My own 'content-centered' theory-- http://www.robotwisdom.com/web/ --is that, depending on the _topic_ of the search-phrase, the entire body of search-results can and should be sorted into an organised structure of subtopics... currently only by hand, by the web-page authors themselves, but eventually with more-and-more-automated assistance...

For example, any time the search-phrase is (or includes) a person's name, the search-results could be sorted along a chronological timeline of that person's life: interviews where they discuss their childhood, awards they won, newsworthy events they were involved in... If they're an artist or author, there should be a definite list of the works they created, and the sorting of search-results could position, e.g., book-reviews together by title, along with pages that advertise the book, and sample chapters, and chatboards or netnews-threads about the book... etc etc etc. (One important-but-ignored dimension that emerges as you explore this problem is the basic _classes_ of information on the Web: etexts, maps, images, reviews, essays, advertisements, etc.)

These are *semantic* categories, and the Semantic Web Initiative (in theory) ought to be encouraging web-authors to explore such sortings. But that challenge is really an authoring challenge-- a _literary_ challenge. When it's undertaken, the result should simply be a _well-organised_ page. One has to discover the most economical and efficient _human_ presentation of the information... but one has only to visit w3c.org to see that the human (literary) dimension has never been their strong point!

My site now offers, at the bottom of every web-design-theory page, a series of links to "Design prototypes" that include various experiments attacking particular classes of search-phrase-topic, trying to find the best human/literary/economical sorting of all relevant search-results. The prototypes include: topical portal : dense-content faq : annotated lit : random-access lit-summary : poetry sampler : gossipy history : author-resources : hyperlinked-timeline : horizontal-timeslice : web-dossier ...but following my onsite links will show hundreds of minor variants, as well.

--------------------

'TITANIC' WAR ON TERROR HEADED FOR ICEBERG (english)
Robert Fisk 2:22pm Wed Jun 12 '02 (Modified on 2:45pm Wed Jun 12 '02)
address: Independent UK
article#185809
http://argument.independent.co.uk/commentators/story.jsp?story=304347
Mr Bush's titanic war on terror will eventually sink beneath the waves
Meantime, all the men who claim to be fighting terror are using this lunatic "war" simply for their own purposes
Robert Fisk
12 June 2002

First it was to be a crusade. Then it became the "War for Civilisation". Then the "War without End". Then the "War against Terror". And now, believe it or not, President Bush is promising us a "Titanic War on Terror". This gets weirder and weirder. What can come next? Given the latest Bush projections last week, "we know that thousands of trained killers are plotting to attack us", he must surely have an even more gargantuan cliché up his sleeve.

Well, he must have known about the would-be Chicago "dirty" bomber, another little secret he didn't tell the American people about for a month. Until, of course, it served a purpose. We shall hear more about this strange episode, and I'll hazard a guess the story will change in the next few days and weeks.

But what could be more titanic than the new and ominously named "Department for Homeland Security", with its 170,000 future employees and its $37.5bn (£26.6bn) budget? It will not, mark you, incorporate the rival CIA and FBI, already at each other's throats over the failure to prevent the crimes against humanity of 11 September, and will thus ensure that the intelligence battle will be triangular: between the CIA, the FBI and the boys from "Homeland Security". This, I suspect, will be the real titanic war.

Because the intelligence men of the United States are not going to beat their real enemies like this. Theirs is a mission impossible, because they will not be allowed to do what any crime-fighting organisation does to ensure success, to search for a motive for the crime. They are not going to be allowed to ask the "why" question. Only the "who" and "how". Because if this is a war against evil, against "people who hate democracy", then any attempt to discover the real reasons for this hatred of America, the deaths of tens of thousands of children in Iraq, perhaps, or the Israeli-Palestinian bloodbath, or the presence of thousands of US troops in Saudi Arabia, will touch far too sensitively upon US foreign policy, indeed upon the very relationships that bind America to the Israeli Prime Minister, Ariel Sharon, and to a raft of Arab dictators.

Here's just one example of what I mean. New American "security" rules will force hundreds of thousands of Arabs and Muslims from certain countries to be fingerprinted, photographed and interrogated when they enter the US. This will apply, according to the US Attorney General, John Ashcroft, to nearly all visitors from Iran, Iraq, Syria and Sudan, most of whom will not get visas at all.

The list is not surprising. Iran and Iraq are part of Mr Bush's infantile "axis of evil". Syria is on the list, presumably because it supports Hamas' war against Israel. It is a political list, constructed around the Bush policy of good-versus-evil. But not a single citizen from Iran, Iraq, Syria or Sudan has been accused of plotting the atrocities of 11 September. The suicide-hijackers came principally from Saudi Arabia, with one from Egypt and another from Lebanon. The men whom the Moroccans have arrested, all supposedly linked to al-Qa'ida, are all Saudis. Yet Saudis, who comprised the vast majority of the September killers, are going to have no problems entering the US under the new security rules.
In other words, men and women from the one country whose citizens the Americans have every reason to fear will be exempt from any fingerprinting, or photographing, or interrogation, when they arrive at JFK. Because, of course, Saudi Arabia is one of the good guys, a "friend of America", the land with the greatest oil reserves on earth. Egypt, too, will be exempt, since President Hosni Mubarak is a supporter of the "peace process".
Thus America's new security
rules are already being framed around Mr Bush's political fantasies rather
than the reality of international crime. If this is a war between
"the innocent and the guilty", another Bush bon mot last week, then the
land that bred the guilty will have no problems with the lads from the
Department of Homeland Security or the US Department of Immigration. But
why, for that matter, should any Arabs take Mr Bush seriously right now?
The man who vowed to fight a "war without end" against "terror" told Israel
to halt its West Bank operations in April, and then sat back while Mr Sharon
continued those same operations for another month. On 4 April, Mr Bush
demanded that Mr Sharon take "immediate action" to ease the Israeli siege
of Palestinian towns; but, two months later, Mr Sharon, a "man of peace",
according to Mr Bush, is still tightening those sieges. If Mr Sharon is
not frightened of Mr Bush, why should Osama bin Laden be concerned? Last
week's appeal by President Mubarak for a calendar for a Palestinian state
produced, even by Mr Bush's absurd standards, an extraordinary illogicality.
No doubt aware that he would be meeting Mr Sharon two days later, he replied:
"We are not ready to lay down a specific calendar except for the fact that
we've got to get started quickly, soon, so we can seize the moment." The
Bush line therefore goes like this: this matter is so important that we've
got to act urgently and with all haste, but not so important that we need
bother about when to act. Mr Sharon, of course, doesn't want any such "calendar".
Mr Sharon doesn't want a Palestinian state. So Mr Bush, at the one moment
that he should have been showing resolve to his friends as well as his
enemies, flunked again. After Mr Sharon turned up at the White House, Mr
Bush derided the Palestinian leader Yasser Arafat, went along with Mr Sharon's
refusal to talk to him and virtually dismissed the Middle East summit that
the Palestinians and the world wants this summer but which Mr Sharon, of
course, does not. In the meantime, as well as Mr Sharon, all of the men
who claim to be fighting terror are using this lunatic "war" for their
own purposes. The Egyptians, who allegedly warned the CIA about an attack
in America before 11 September, have been busy passing a new law that will
so restrict the work of non-governmental organisations that it will be
almost impossible for human rights groups to work in Egypt. So no more
reports of police torture. The Algerian military, widely believed to have
had a hand in the dirty war mass killings of the past 10 years, have just
been exercising with Nato ships in the Mediterranean. We'll be seeing more
of this. It was almost inevitable, of course, that someone in America would
be found to explain the difference between "good terrorists", the ones
we don't bomb, like the IRA, Eta or the old African National Congress,
and those we should bomb. Sure enough, Michael Elliott turned up in Time
magazine last week to tell us that "not all terrorists are alike". There
are, he claimed, "political terrorists" who have "an identifiable goal"
and "millenarian terrorists" who have no "political agenda", who "owe their
allegiance to a higher authority in heaven". So there you have it. If they'll
talk to the Americans, terrorists are OK. If they won't, well then it's
everlasting war. So with this twisted morality, who really believes that
"Homeland Security" is going to catch the bad guys before they strike again?
My guess is that the "Titanic War on Terror" will follow its unsinkable
namesake. And we all know what happened to that. Also from the Commentators
section: http://argument.independent.co.uk/commentators/story.jsp?story=303766
Robert Fisk: Gangsters, murderers and stooges used to endorse Bush's vision
of 'democracy' hargument.independent.co.uk/commentators... add your own
comments ya (english) junglejaws 2:45pm Wed Jun 12 '02 comment#185816 maybe
the saudis are on the way out, or just something i read somewhere. ================
Guardian Newspaper Editorial Calls for End to Capitalism (english) Paul
Foot 10:15pm Wed Jun 12 '02 (Modified on 7:02am Thu Jun 13 '02) article#185884
However much the capitalist critic lectures his capitalist colleagues about
their individual misdemeanors, he cannot and will not correct the intrinsic
flaw in the economic system he represents, so starkly symbolized by the
greed of the people who run his bank. Is capitalism sick? Yes, disgustingly
so. Its sickness is terminal, and it urgently needs replacing. Published
on Wednesday, June 12, 2002 in the Guardian of London Cash for Chaos Is
Capitalism Sick? by Paul Foot "Is capitalism sick?" inquires a challenging
headline in the Sunday Times. The answer, over many paragraphs, is no.
Capitalism, the article reveals, is in fine fettle. The only thing wrong
with it is the occasional rotten or greedy capitalist. Hank Paulson, chief
executive of Goldman Sachs, warned the National Press Club in Washington
last week: "Business has never been under such scrutiny. To be blunt, much
of it is deserved." The Sunday Times moaned its way through a litany of
recent scandals. First there was Enron, whose disgraced chief executive
Kenneth Lay is a close friend of President Bush, whose audit committee
was chaired by former Tory minister Lord Wakeham, and one of whose more
ideological paid advisers, Irwin Stelzer, still has a weekly column in
the Sunday Times. Now there is Tyco - presumably short for tycoon - whose
former chairman, Dennis Kozlowski, is charged with tax evasion and whose
director, Lord Ashcroft, is a former treasurer of the Tory party and a
generous donor to British state education. ADT College, named after Ashcroft's
company, still teaches children in South London, but perhaps now it should
change its name, since ADT was swallowed by Tyco in 1997. Last week there
was great news for another great A: Bill Allan, chief executive of a telecoms
company ludicrously called Thus. Allan and his fellow directors got bonuses
worth 70% of their salaries to mark something called "exceptional business
performance", presumably a reference to the 72% fall in the company's share
price. Last week, these heroic As were capped by a sensational B - for
Bonfield, the knighted former chief executive of ailing British Telecom,
which recently wound up its final-salary pension scheme for ordinary workers,
but somehow managed to find a few million to "top up" Sir Peter's already
vast pension by another £2,000 a week. Bonfield has
a perfectly good job elsewhere, but when he left British Telecom he took
a year's salary (£820,000) and a bonus of £615,000,
no doubt as a mark of respect for his record as mastermind of one of the
most disastrous privatizations of modern times. These companies and individuals,
Paulson argued, are letting down the system. They are giving capitalism
a bad name. If only individual capitalists didn't lie, cheat, perjure themselves
in libel actions, stuff their pockets with grossly excessive or ill-gotten
gains, deceive the taxman by buying expensive paintings with other people's
money and then hanging them on their own walls, if only their accountants
didn't spend their extremely valuable time thinking up complicated schemes
to avoid tax and then shredding the documentary evidence, then the beautiful
symmetry of the capitalist system would shine forth. If only the rotten
apples could be rooted out of the capitalist barrels, the full glory of
the fruit could be properly appreciated. The problem with this argument
is that it overlooks the central feature of capitalism: the division of
the human race into those who profit from human endeavor and those who
don't. This division demands freedom for employers, and discipline for
workers; high pay and perks for bosses, low pay for the masses; riches
for the few, poverty for the many. Under capitalism the gulf between rich
and poor grows wider and wider. The whole point of the system is that it
works against equality, against co-operation. It stunts, insults and criminalizes
the poor; glorifies, cossets and pardons the rich. All human life is corrupted
in the process. So even if you could discipline all the offenders, lock
up all financial advisers to the US president, ban from public life all
former Tory vice-chairmen, even if company directors spent a year in jail
for every bonus they steal, there would still be no hiding place from capitalism.
The rotten apples are the barrel. Reading last week's sermon from Paulson,
I was reminded of a brace of challenging headlines in the Guardian on December
10 1993. These headlines highlighted the difference between a group of
26 million people who shared $2.2bn and another group of only 161 people
who shared $2.6bn. The first group was the entire population of Tanzania,
the second the partners of Goldman Sachs, the company Paulson heads. And
however much he lectures his capitalist colleagues about their individual
misdemeanors, he cannot and will not correct the intrinsic flaw in the
economic system he represents, so starkly symbolized by the greed of the
people who run his bank. Is capitalism sick? Yes, disgustingly so. Its
sickness is terminal, and it urgently needs replacing. ©
Guardian Newspapers Limited 2002. add your own comments original LINK (english)
you're welcome 11:40pm Wed Jun 12 '02 comment#185894 http://www.guardian.co.uk/Archive/
Article/0,4273,4431871,00.html www.guardian.co.uk/Archive/Article/0,427...
=========== used to be able to bathe the babe but (english) piet
1:15am Thu Jun 13 '02 comment#185907 . .. it'll take more intense guidance
to get it on track and off its present course of juvenile delinquency
. . started off so co-operatively, so inquisitive, but balls are subject
to the tides of recklessness as much as the exhibitionistic variety of
mammary gland carrieress plays that sort of exposure for all it's worth
I imply it has long life ahead of itself, sure but not without a lot of
reform and forgotten norm brainstorm. You could do worse than start at
the link below. Time saving warning concerning what you might be about
to see ; if you are a lazy reader, don't bother, it hasn't been put together
that way. poetpiet.tripod.com/guest_appearances/in... ===========
Ok, end capitalism..... (english) Realist 7:02am Thu Jun 13 '02 comment#185925
Ok, end capitalism. Then what? What system will you replace it with? Please
do not use some romantic utopian ideals. Some concrete, realistic plans
would be refreshing. ============ Todd Boyle: the info gap, Nov 2001. The
Info-gap (how "the thumb on the scale" rules, in daily life, and how metadata
repositories can help.) Real outcomes in life are often decided more by
the manipulation of what is in the other guy's mind than by real or tangible
things. Efficiency in business processes and the macro economy can be greatly
improved if we address this fundamental problem directly.
"The map is not the territory" --Alfred Korzybski
In the modern economy,
most goods and services will be consumed by one and only one consumer --a
win/lose game. People are deadly serious about this question and the outcomes
are determined by an endlessly complicated social and political process,
ultimately, of persuasion. In all living beings, there is a gap between
perception and reality --mental models cannot fully correspond with what
is real, in the phenomenal universe. Information gaps are not new, and
have always existed, and all of us work very hard creating and maintaining
them. Nobody individually wants to reduce them, and nobody will accept
or use software which reduces their freedom or privacy, or their ability
to transmit "partial truths" and outright lies. Nevertheless, software
itself can accelerate the truth. Many examples exist in which technology
itself brought broad social changes regardless of the wishes of any particular
individual or group in society. Other humans are of course, real things
in the universe, which all of us study very deeply! Other humans do real
things like giving you their money, or power, or sex, or services. They
do these things as an expression of their free will, which is of course
highly associated with their mental models. Accordingly, real outcomes
can be influenced and manipulated by intentional manipulation of information
available to the observer, or influencing their approach to decisionmaking
or analysis. OK. Boring, so far. Right? Even animals practice illusion,
persuasion, and deception. We all try to broadcast information that creates
a favorable impression, detect and repair the outbreaks of doubt or disbelief
in the other person's mind, block the spread of information unfavorable
to us, etc. To survive as adults in this sea of lies and misinformation,
we maintain very sophisticated bullshit detectors, maintain translations
or interpretations different from the literal content of messages (often
as much as 180 degrees opposite), devalue information that is being transmitted
loudly to us, amplify information that is trying to hide from us, and in
general, invest a tremendous amount of mental horsepower in these corrective
adaptations. We create info-gaps customized for the particular audience.
What men tell women is different from what they tell the alpha male. What
companies tell investors is different from what they tell customers. Employees
invariably tell different truths to their bosses than their colleagues
or
subordinates. I happen to believe that everybody would be better off by
transmitting clear and accurate information, particularly in business.
It would result in better decisions based on fundamentals, on facts, with
full understanding and consideration by decisionmakers based on their own
interests. At present, much business and consumer decisionmaking is far
from optimal, effectively tricked into making the wrong decisions by omissions
and, well, lies. "All multifarious means which human ingenuity can devise,
and which are resorted to by one individual to get an advantage over another
by false suggestions or suppression of the truth, and includes all surprise,
trick, cunning, or dissembling, and any unfair way by which another is
cheated." - Definition of fraud, Black's Law Dictionary Clearly, much of
today's economy is affected and guided by the blocking of information.
All supply chains and distribution channels are based partly on info-inefficiency.
Nobody wants their customers to know their wholesale costs, for example,
or wholesalers or alternative suppliers. Nobody wants their competitors
to know what products are selling well, or which products have good margins,
or particularly, which products have low support costs, low returns, etc.
Nobody wants their employees to know which customer account or distribution
channel or activity is really profitable, or how easy that really is. Nobody
wants their competitor to know how to organize production. This barely
scratches the surface of these things. There is a very deep academic and
industry vernacular on topics like confidentiality, security and the prevention
of industrial espionage, in which you can find catalogs and listings of
the financial damages suffered by companies from information leaks, both
on the supply side and demand side, in various industries. Furthermore,
most sales and marketing and advertising messages contain deliberate info-poisoning,
i.e. they are hardly intended to be objective. The fundamental structure
of the modern economy, government as well as corporate sectors, is held
in place by information gaps. This is so pervasive that it creates a collective
illusion that our fundamental well- being depends on maintenance of this
status quo, through privacy and confidentiality. Actually, well-being depends
on secrecy and lies only at the micro level. When customers find alternative
suppliers, your sales truly go down. When your suppliers find alternative
markets for their goods, your business costs truly go up. When employees
find higher-paying jobs, or learn the salaries of their co-workers, your
labor costs certainly go up. When customers find that your product does
not really grow new hair or improve their sex life, sales truly go down.
These are real outcomes. But at the macro level, this is a classic tragedy
of the commons where almost everybody is impoverished, needlessly. Massive
resources are wasted in manipulation of information to poison and reduce
its usefulness, and more problematically, the production of real goods
and services is corrupted into forms that do not accurately reflect the
actual desires of consumers. Reducing the info-gap is an ambitious goal
that will take 100 years, and will be wildly disruptive but our institutions
and our cultural and spiritual foundations would certainly meet this challenge
with aplomb. Organizations will remake themselves. A human is a very great
being, capable of great adjustments in the space of decades. The status
quo has at least three components. 1) huge, entrenched, willful BLOCKAGES
of information. 2) huge entrenched CHOKE POINTS where information is readable
and accessible ONLY by somebody having interests which are adverse to other
party(ies) e.g. VISA, the banks, government tax collectors, Microsoft Passport,
etc. 3) huge entrenched ORGANIZATIONS operating as naming czars, who manipulate
outcomes and prevent developments that would otherwise occur in a free
society by either BLOCKING the appearance of certain vocabulary and at
the same time FORCING the usage of other vocabularies having the results
they require. The motivation and mechanisms for this have been articulated
by George Orwell and many other authors. http://www.k-1.com/Orwell/1984.htm
The control over the names of things, and the grammars for assembling
those names into messages controls outcomes just as the shape of the table
at the Vietnam peace talks controlled outcomes. The legal industry of course
is the primary ministry of truth, standing behind all kinds of rackets
FBO various elites, i.e. whatever arbitrary collection or aggregate of
economic interests happens to have the most money. It doesn't matter what
the industry or activity is. If it is a source of energy, you will find
the legal industry. At this point, you can say just about any damned thing
you want; we're still at a stage of trying to stop the most atrocious wrongs,
http://www.rbs2.com/infotort.htm The involvement of the legal industry
is a feedback loop that reinforces whatever arbitrary info-gap happens
to manifest first. Whatever results in economic power, eventually, results
in legal force. (I would concede, western legal and government activities
are a font of mercy and humanity at times. But when does the legal and
government industry concede the opposite?) "The law" and its courts and
courtesans are now a force beyond anyone's control, from which even its maintainers
seem unable to escape. I am referring to the grammar of contract law, GAAP
reporting, copyright and trademark, definitions of all things regulated.
All of these things have spun into endlessly complex forms, which no single
person could possibly master, but which are absolutely binding and final.
Superficially they seem simple and fair. Ultimately, they are full of loopholes
in which only the best lawyer prevails. I am not leveling any accusation
at the legal-government industry that is any different from other industries:
in a mass economy of billions of people, it is subject to the same mechanisms
as everybody else, and in Darwinian fashion, it reorganizes itself around
its food sources. We are all pretty powerless before these forces as long
as we obey them. The law most directly supports and increases information
gaps by its unequal protections of stakeholders in intellectual property,
privacy, and security. A lot of this nonsense of intellectual property
and information gaps will crumble away, if everybody adopts ISO 11179 data
element registries. This is an international standard for a neutral registry
for storing the names and definitions of data, i.e. these are called MDRs
(Metadata registries) and they make the definition of language more democratic.
The process of defining data is totally open, and each actor within any
community of users can adopt elements, or ignore others. How will actors
behave, when freedom of choice in semantics exists? There may be some precedents
in the behavior of subcultures or urban gangs. Subcultures create their
own language when they feel that gives them a more accurate map to the
territory of life, or better prospects than the existing language and society
offers them. One may observe in cults and gangs, direct examples of rule
by redefining grammar - in other words, governing by lying, by creating
a reality warp. A really good metadata registry would have a very interesting
debate over specification and goals. The software would take more than
a week to write. Certainly it would never happen in the current goodcop-badcop
routine of Microsoft, IBM, Sun, none of whom sincerely wants to change
the status quo any more than their global 500 clients or the governments
they run. The software industry itself is totally united around some shared
goals--that software industry shall obtain some rent-collecting position
in the global economy. This is fundamentally based on information spreads.
The IT industry has for 40 years been on an absolute romp, very successful
at this. Nobody who has a clue about the strategic value of information
wants to share information, itself, in any way. They always want to construct
artificial messages, to send the most positive constructions of the English
language as legally permitted, and commercially expedient. Whatever the
truth may be, it certainly has no influence on the stream of messages that
flow out of today's soviet corporations. The voice of today's corporation
is calculated to maximize revenue, i.e. hit the middle of the bell curve.
To be sure, messages are reasonably honest, since the public is not totally
stupid. Messages are as honest as necessary and not more. Markets and laws
make them honest, for example when the public already knows enough facts
that they cannot lie outright. Corporations, like people, generally want
to block *all* information outward to the public and this is legally permitted,
indeed the right to "privacy" is protected by laws. If corporations could
block all outbound info. channels without harming their intake of incoming
information they would do that, in a heartbeat, giving their marketing
and PR departments complete control over the public message. Indeed many
companies continue working hard to prevent any employee talking or emailing
anybody outside the company. Software companies are typical of this. They
are always trying to gain info-advantages i.e. get clues to software design
without anybody getting clues in return. Or to lock information into their
software products, for their own enrichment first, for their users second,
and explicitly to screw and disadvantage everybody else other than themselves
and the users of their software. A software company is a classic externality
pump. It externalizes costs and harms, while acquiring and securing gains
and advantages for itself. So, under the economic rules today, the software
industry is a sick industry. Let me illustrate this with an example. If
there were a software program that did nothing positive whatsoever, but
disadvantaged everybody else in a particular industry and resulted in its
user getting 10% more sales, and the software did nothing illegal, ask
yourself, "would I buy that software?" Of COURSE you would and there are
many examples of programs like that. SPAM programs are one example at the
business level but most of these "abusers of the commons" are at the software
level. Software companies always want to appear "open". While posturing
to customers about data openness, they actually practice every possible
mechanism that customers do not understand, which effectively immobilizes
customer data or prevents users from abandoning their applications by moving
data to competitors. That is why it takes an act of congress to get software
companies to agree on any data standards. Major software companies view
standards for sharing data or descriptions of data as very similar to the
nationalization of an industry by a military junta or something. It just
takes away some of their profit margins. So, they all pretend to work on
standards, and put out the same message that they want standards. In fact,
the nature of the software market, and the incentives in our twisted system
of IP laws, ensures there must never be standards. Progress only happens
when users collaborate on this, and in America this is only possible for
large corporate users. Accordingly, large corporations have achieved quite
a lot of control over their software providers. The result is that all
software today (Windows, Linux, Unix -- all of it) is fundamentally built
around an assumption that its network interactions are oriented around
servers. Servers will control your permissions and rights on the network,
and the software on your PC will be secure and private but not sovereign.
Some element of it can always be controlled by servers. Why are there so many millions
of people in the software industry scheming and conniving to get control
of other people's software and other people's data? This even includes
the Linux community. Why does everybody want to control my data? Why doesn't
somebody write an application, for once, that gives the user sovereignty
over his own data? Much of modern life seems to consist of a continual battle
to prevent other people from manipulating us by controlling our information
stream, manipulating our data, preventing us from accessing data about
ourselves, or from finding direct ways to get the things we need to buy and
sell the things we need to sell. Why don't we build something that gives
total control of the data to its owner, finally and totally? Obviously nobody
would recognize its usefulness at first and would not know how to use it,
any more than the Russians know the value of freedom, free markets or free
elections. These things take generations to learn. But learn they will.
Because this device will achieve lower transaction costs at the interpersonal
level: it will enable P2P commerce with unambiguous description of goods/services,
and strong and persistent reputation. The economic savings derive from
more people getting more of what they really need, and from the absence of giantism
and giant info-poisoning and lies. It will have to be a device, if it's
going to be secure. A PDA or tablet PC with something like BSD unix, since
Windows can never be secure. What you want is a hardened vault that cannot
be hacked, with all of its execution in silicon and no programmability
whatsoever. I question the whole security model today. Windows, linux etc.
are always designed to give control to a sysadmin. Everything you see in
the market about security is **still** oriented this way. There's nothing
happening that strengthens the individual or makes the PC sovereign. Anything
that makes the PC sovereign obviously is the enemy of the state, and every
software company everywhere, and all corporations. Any sovereign computer
reduces the economic profits that are achieved by the manipulation of outcomes,
by the manipulation of information. This has to be different. It has no
rights system. Whoever buys the device can setup access to the device,
by its PIN for example. It should delete its memory after five consecutive
bad guesses, for example. In other words, the machine gives rights to whoever
has it in its possession, plus knowing its PIN. Achieving this simple feat,
reliably, will be sufficient. The key functions you need are secure messaging
(PGP email for starters), web browser, and something like MeT security
element which enables you to digitally sign *whatever you want* not just
what the government or the bank clerk puts in front of you. The big bosses
of the universe wish you could not agree to dealings with those immediately
around you, and have succeeded in creating stupendously large pyramid entities
like the US government with its $4,000,000,000,000 debt. I view this with
a combination of dread and bemusement, how so many people could be sucked
into this scam. Regardless of the hardness of the computing platform you
have to assume it would be outlawed and that the internet itself will not
carry its traffic. Who are we kidding? The internet costs hundreds of billions
to build and maintain--- it is a very expensive operation and its operators
will inevitably tighten their grip on consumers and raise prices and spreads,
on all fronts. Or go bankrupt and surrender their equipment to the remaining
carnivores. Worse carnivores. The prospects for alternate networks are
accordingly pretty grim. They will be blocked by a broad variety of legal
and economic attacks. You have to assume an entirely parallel network.
The device will need local ad-hoc radio (802.11b perhaps, in 2001) with
a built-in router that automatically assembles routing tables of all the
devices nearby out to at least 3 or 4 hops, and cooperates with them,
efficiently, on routing decisions.
Obviously the truth-in-commerce device would need a mechanism for money. Perhaps a mini-mint for issuing
its own digital cash. Digital Cash mints are not high technology. The choreography
etc. are well known. Digital bearer cash = my promise to pay to the bearer,
X amount of money, or commodity. Nobody can outlaw your issuance of digitally
signed promissory notes like this in the course of buying something from
somebody. You're not a bank, accepting money in deposit. So, under this
scenario, all the people and businesses would increasingly issue promises,
rather than governments. Verifiable paper of large, established businesses
would be regarded as good money. One of the mechanisms of digital cash
mints is to enable party B to pass part or all of your scrip to Party C,
by giving party C the capability to check back with the Mint to prevent
double-spending and, perhaps, maintaining a title registry of who is the rightful
owner of the scrip. The secondary purpose may be, perhaps, to enable everybody
to confirm that Party A didn't issue more Digital Cash than he has got
money in the bank. (Some parties will have incentives to disclose their
currency reserves or other collateral to each other.) Digital cash mints
are not that complex, particularly when mechanisms for masking identity
are left out.
The sovereign device would, perhaps, never conduct any communications, or send or accept delivery of anything, unencrypted. When everything is universally,
strongly encrypted, everybody benefits. The method of ecommerce would be
exchanging standard XML documents for orders, invoices, and payments, within
a business process framework such as ebXML. The system of accounting could
be local applications, or newer ones such as shared transaction repositories
or trading systems in the sky. http://www.gldialtone.com/fbc.htm Why not?
This stuff works beautifully at low bandwidth, with low performing clients
and hosts. It allows the individual the capability to do business without
taking away the intrinsic freedoms we already have. http://www.gldialtone.com/glism/glism.htm
Open commerce, based on ebXML and honest metadata standards, is still workable
for giant corporations, governments, and other huge aggregation schemes.
It does however, remove their existing built-in advantages for giantism.
You want an architecture for ecommerce in which the costs of participating
in the business process do not decrease geometrically based on the size
of the company. The code rules, as Lessig said. What kind of future do
you want to create? Today you have command hierarchies. A total P2P commerce
solution gives every sovereign party the total privacy within each binary
interaction, without injuring any other party's capabilities to operate
bigger, more complex information systems when desired. In other words you
have automated the bookkeeping, saving everybody a hell of a lot of time,
without having to pick up the unnecessary burden of changing anybody's
behavior, or requiring any centralization of anything in centralized accounting
hosts, registries, etc. that are susceptible to takeover and rent extracting.
I think the man in the street would use this device. Lots of people would
use it just because it's wireless and secure and has a browser and messaging.
People would use it to conclude ordinary business such as shopping. People
are fed up with the 5% fees on credit cards, and with banks and governments
reading all their transactions. Stores could lay one by the cash register
and it would carry on simple buy/sell choreography by itself. People selling
prohibited contraband would jump on it, just as they are already an economic
base for cellphones (buying new phones and new accounts frequently). Gray
markets would grow very very fast in every trade, etc. which now operate
in cash. The government would have to outlaw the things. It would be very
interesting. Todd Oct 4 2001 some links - http://www.agorics.com/Library/
agoricpapers/aos/aos.7.html#section7 here's more of this stuff, sorry for
my tedious writing style. Whitepaper Roadmap to GL integration with ebXML
Core components definitions for the accounting context Todd Nov. 15, 2001
======================== The argument to ABOLISH DOUBLE-ENTRY ACCOUNTING
and ABOLISH THE ASSETS=LIABILITIES + OWNER'S EQUITY equation Accounting
software, and all forms of business software, really may be viewed as a
representation or a set of symbols, for some underlying reality. Obviously
you have words like "customer ID", "supplier" and names of accounts throughout
the system, for which meanings are not ambiguous. You also have a lot of
structures representing relationships and hierarchies, which are more subtle.
Like all systems of notation and semantics, double entry accounting can
never be anything but a model. It is a map to a territory, whose validity
may be evaluated by reference to that underlying reality in the "real world".
The double-entry model is based on solid foundations mapping to the thought
processes of business owners which were prevalent during the 20th century.
However, double-entry would never have happened, historically, in the presence
of today's computer platforms and internet connectivity. Those who remember
the discipline and procedures necessary to make a paper ledger balance
at end of month will understand, the system of journals and ledgers primarily
arose to improve mechanical accuracy-- a need which computers solved. The
trial balance, double-entry, and current notions of GAAP reporting are
obsolete and should be replaced [with a network-centric model]. Systems
for buying and selling (and all execution of business transactions) have
profoundly different requirements and economic drivers. Because these are
intrinsically shared events and data, they are being rapidly constructed
in a variety of shared architectures on the internet in numerous vertical
communities, SCM and B2B hubs, etc. Systems for accumulating historical
audit trails and the associated entity financial reporting will necessarily
take a different track from the shared execution platforms above, and there
is little utility for double entry trial balance other than to the extent
it maps to the human mind. To be complete and accurate, the accounting
model would include the Subject and Object symmetrically. But such models
have never taken root even in distributed systems, because so many of the
human actors are still dwelling in a self-centered model of thought--- they
install software which is capable of modelling duality the way they think,
rather than the way things really are. Today's accounting consumer wants
software which depicts "My Assets minus My Liabilities equals my Equity."
Nobody cares what the other guy's "view" of each transaction might be.
Nobody is remotely interested in an accurate entity/relationship diagram
of the whole system, yet. They cannot imagine drilling down into an expense
to understand their supplier's costs and sources. Double entry accounting
has certainly proven a durable metaphor for reflecting economic transactions.
Perhaps this is somehow related to Karma. Nothing is free. Newton's third
law states that every action has an equal and opposite reaction.
Double entry accounting is Newtonian: you record an asset only if you can
record the related liability. I suppose it is something of an achievement,
actually, that so many people have risen from the relative ignorance of
single-entry list software, to understand the usefulness of double-entry
accounting products. Recording the obvious half of any transaction into
a single-entry list is easy, for example, some cash has gone out the window.
Double-entry forces the bookkeeper to record the other half of the story:
the offsetting liability, asset, etc. connected with it. Taking the extra
time to consider the offsetting entry leads naturally to improvement in
accuracy of both the primary and offsetting entry. There are at least 10
million people using personal accounting software, and most of those packages
can be used without taking the extra step of double entry. But I
can testify that very few of these people are still bringing incomplete single-entry
Money or Quicken lists to their CPAs at year-end. Double-entry rules.
You get correct bank and receivable balances. Let's look deeper. The balance
sheet is intended to be a snapshot of the company's karmic balance in the
marketplace, at a point in time. Every asset or liability that has been
recorded must be balanced by identifying it with the other asset or liability
or equity account representing that trading partner, owner, debtor or creditor
with whom the business transaction was concluded. I believe that there
is an entry for the "self", and an entry for the "other". Income and expenses
are of course part of the balance sheet. They are an explosion of the detail
of retained earnings. Accountants are well aware of this but it bears repeating:
there is really no such thing as an income statement. It is a breakdown
of the retained earnings account in the equity section, for a particular
period. The balance sheet is really an income statement, and the income
statement is a balance sheet. There is no such thing as a balance sheet
or income statement. They are both artificial constructs. It is a myth,
that the balance sheet represents a "State" and the income statement represents
"Changes in State". This is one of the biggest psychological mistakes people
make. You've just taken a diverse collection of events, which are apples
and oranges, and added them up into totals. Displaying all these totals
within a format such as a "balance sheet" and "income statement" provides
no use at all. The entire edifice of financial reporting and audit of publicly
listed companies is a pragmatic creation, born of political economy. It
is a residual legal artifact of the historical opposition between corporations
who do not want to disclose, and shareholders who require degrees of disclosure.
As such, this statutory reporting edifice has no reliable compass and is
arbitrary. Nobody is going around deconstructing the two-column trial balance,
or questioning its metaphysical accuracy as a model of commerce.
DOUBLE-ENTRY IS WRONG AND OBSOLETE:
What we are really modelling in the classic notion
of an accounting system is a sequence of transactions. That's all it is.
A historical trail of events, which were real enough, but recorded in a
particular, peculiar way. You begin with zero of anything. You start the
model at a point in time. You begin conducting business, and make a notation
for cash in the bank (whatever THAT is-- a subject for another day.) The
totals on your balance sheet begin to increase and decrease. Those are
not real things in any sense; rather they are just running totals of how
much water is in each of the buckets. Look at the typical SME (small/midsized
enterprise) income statement, for example, laid out as a grid of SENDING and
RECEIVING document flows:

  REVENUE: purchase order, invoice, settlement of invoice, cash/one-time payments
  EXPENSE: sales orders, bills, settlement of bills, cash/one-time payments, payroll

This really should
be an N-dimensional grid since everything on the revenue side is really
somebody else's expenses, and vice versa. You would need to have a sheet
for each party in the global economy. Whenever you are forced to put a
3-dimensional cube on 2-dimensional paper, you have to use recurring bands
of rows, and that is exactly what we have in every accounting system in
the world: first you iterate thru the incomes, then you iterate thru the
same exact transactions as expenditures. The recurring rows are nature's
way of saying, something's wrong. Accounting is such a hall of mirrors.
Luci Pacioli didn't have any three dimensional software so he built it
on 2-dimensional repeating list. he didn't have any network so he built
a model of the reciprocal entry of his trading partner into his own general
ledger. The whole thing is stupid. HOW TO FIX IT 1. The world's accounting
infrastructure needs to move out to a shared architecture on the internet,
because all transactions are inherently interconnected and shared with
trading partners. The access security will be solved. Don't talk to me
about it. The execution systems are already moving to the internet. If
the internet architecture is secure enough for actual execution of orders,
deliveries and payment it is certainly secure enough to provide views of
those transactions to their historical owners and block them from the view
of everybody else.

2. The historical reports and views of transactions
have to be redesigned to support their multidimensional nature. It is a
requirement, that we escape from the limitations of the "income statement"
and "balance sheet" which are riddled with denormalized views, repeating
rows such as described above. The user should be able to navigate (drill
down) from their root ledger or interface, into totals such as money balances
and types of transactions, in their native hierarchies that really exist,
to get views by date, by supplier or customer, by various categories of
goods and services, and other attributes as they so desire. The root ledger
interface would have a single, top level node called "Me", analogous to
the root node in an XML document. The hierarchies of subtotals that would appear when you drill down would be the native characteristics and attributes of the transactions that you have executed, in accordance with the terms of those deals, agreed with your trading partner at the moment of the transaction. There would be one primary record of the transaction
on a shared host someplace. You would only have indexes to the real entry.
You could maintain your own additional labels and attributes privately.
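As a rough sketch only (every element and attribute name below is invented for illustration; nothing here comes from XBRL or any published schema), such a root ledger view might look like this:

  <Me>
    <!-- drill-down by trading partner; each entry is only an index to the
         primary record held on the shared host -->
    <Party name="Acme Supply Co.">
      <Balance type="payable" amount="1250.00" currency="USD"/>
      <TransactionRef host="https://bsp.example.com/ledger" id="TXN-2000-0147"
                      privateLabel="office chairs"/>
    </Party>
    <!-- unclassified rows simply fall back to a native listing, checkbook-style -->
    <Category name="unclassified">
      <TransactionRef host="https://webstore.example.net/orders" id="ORD-8841"/>
    </Category>
  </Me>

The point of the sketch is only that the "Me" document holds pointers and private annotations; the single primary record of each transaction stays on the shared host.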
But the application of later judgment to recharacterize and re-label things
would be reduced. Some people would be systematic. Businesses would subscribe
to a shared ontology such as the XBRL taxonomy and agree to label everything
that way. Easy solution. Other people would leave the descriptors blank.
Who cares. When they drill down into the "unclassified" row, they would
get the native listing, just like you see in your personal checkbook. That
works just fine for most people in the US. The whole assemblage of your
transactions on diverse shared hosts on the internet would be no more, and no less, auditable by governments than today's obsolete paper-and-cash systems. Your transactions would be on numerous hosts all over the internet,
maintained in encrypted form not even accessible to the sysadmins in most
cases; the owner would have the encryption key. If you lose the key, you effectively erase the audit trail. Giving your key to a sysadmin is infantile anyway.
It is the original mistake that causes all the bad outcomes and scenarios.
Why do we beg, and kiss the asses of regulators, to control invasions of
our privacy? This is a real head-scratcher, to me. We can control all those
things, unilaterally. The overall income statement or balance sheet would
cease to exist; it is simply irrelevant, a fiction. It doesn't exist.
The periodicity of one year doesn't exist. All periodicity is an arbitrary
construct. These arbitrary labels and structures cause more harm than good.
The world is immediate and spatial. It is not a two-dimensional static
report. What businesses need are tools for efficiently conducting business.
Tools to answer specific questions about liquidity and receivables and
so forth. The tools and reports for those needs lie completely outside
the financial statements of course, and have nothing to do with double
entry. http://www.gldialtone.com/EndRedundancy.htm and http://www.gldialtone.com/Exploration.htm

* Todd F. Boyle CPA http://www.GLDialtone.com/webledger.htm * tboyle@rosehill.net
Kirkland WA (425) 827-3107 * XML accounting, WebLedgers, ASPs, GL dialtone,
whatever it takes

===================
From: Todd Boyle [tboyle@rosehill.net]
Sent: Tuesday, July 11, 2000 1:14 PM
To: xbrl-public@egroups.com
Subject: Draft Schema for General Ledger

As SMEs (small/medium enterprises) begin
to conduct business over the internet, their systems of storing information
must change. Webledgers will dissolve the "accounting" business. The word
"Accounting" doesn't mean anything. There isn't "accounting" software anymore.
I'm not sure there is even *business* software. There is just commerce.
Still, there must be an abstract model of your financial life, i.e. your
balance sheet, whatever you want to call it. I call it a webledger, or
the root ledger. When these models live on the internet reconciled with
everybody else's model, life is going to be a whole lot different... For
over 25 years, business software became more and more integrated as it
relentlessly pursued automation of cash, payables, receivables, and inventory
across all the systems in the office. We learned after great pain, that
our software had to be integrated, and "integrated accounting software"
meant choosing a single vendor to supply all the modules. Now, we must
unlearn that lesson, for these reasons:

1. The internet manifestly contains thousands of excellent BSPs (business service providers) which are the "software modules" for conducting business. These include every selling, purchasing, payment and admin activity formerly accomplished on desktop or LAN-based software. These thousands of BSPs will not, in the near future, be owned by the same company.

2. There is no likelihood that all of your customers, suppliers, and service providers will use the same BSPs. This fact will cause millions of businesses to experience great pain, while others will understand and accept this immutable fact.

For the last 70+
years, the general ledger has been the absolute foundation of accounting.
Classic double-entry accounting (CDEA) provides a whole set of double-entry
solutions and journal entries encapsulating every possible transaction
type in the business universe. The CDEA and A=L+OE system of representing
business transactions was, if anything, strengthened during the past 25
years of integrated software. To repeat: millions of accountants practice,
every day, this system of notation for recording and summarizing business
dealings. This is a language, a set of symbols, and a whole bunch of rules
that are so deeply entrenched that no other system of notation can gain
market share, at the human interface level. Business systems have obviously
moved on, becoming optimized for different things and paying not the least
attention to CDEA. Especially in internet commerce! But when you get to
the controller or the CPA or even the AP/AR clerk, whenever you have a
human being, CDEA is entrenched at the user interface and in reporting.
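For any non-accountant following along, the A=L+OE identity is easy to state. A minimal worked example, using the same simple credit sale that appears later in this post (sales 100, tax 8, receivable 108):

  A = L + OE
  \Delta A = \Delta L + \Delta OE
  +108 \;(\text{accounts receivable}) \;=\; +8 \;(\text{sales tax payable}) \;+\; 100 \;(\text{revenue, closed to retained earnings})

Every CDEA journal entry is just a set of rows engineered to keep that equation in balance.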
One place to begin deconstructing CDEA is the chart of accounts. The notion
of an "Account Code" may be obsolete; the Chart of Accounts has historically
been overloaded with multiple uses. It has often been an intricate, denormalized
table combining multiple codes such as organization structure, department,
and accounting classifications. These are called "segmented" account codes,
for example:

  5110-10A  Overtime - branch 10 (Peoria office), department A
  5110-10B  Overtime - branch 10 (Peoria office), department B
  etc.

Charts
of Accounts have served multiple needs including ease of keypunching, permissions/internal
control, reconciliation with external entities, and downstream reporting.
Charts of Accounts are an unmanageable mess and obstacle to systems improvements
in many companies. Most accounting software does not provide adequate solutions
for retroactively changing or merging account codes in the chart or in
the transactions databases. For example, Quickbooks enables overwriting
of account codes with new ones, and merging accounts. This is highly
useful in small business but you can no longer reproduce financial reports
that had been published before you changed the codes. Whenever account
codes carry information for several different dimensions, companies should
explore other accounting software that reports those dimensions separately.
For example, the Chart of accounts should NOT encode information related
to organization structure, geographic, product, party, or any other unrelated
information. YES, those may be required in a general ledger. But NO, they
should not be combined into the chart of accounts, especially as the need
for global integration increases. I argue that general ledgers should not
even have "Charts of Accounts" in future. After you strip away the tangled
up mess of information combined into charts of accounts, the only remaining
purpose for account codes is to reflect classifications for the GAAP domain
(financial reporting). XBRL now defines the available GAAP classifications
natively, in the U.S. XBRL is the universal "chart of accounts". The words
"Chart" and "Account" themselves, are artificial and contrived. In the
segmented example above, the user interface and the data storage for the
accounting system would contain an XBRL classification, a geographic (branch)
code, and an organizational (department) code. Voila. No "chart of accounts".
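A minimal sketch of what such a stored row might look like; the element names are borrowed from the draft field list later in this post, while the row wrapper, the amount, and the XBRL tag shown are hypothetical, standing in for whatever the taxonomy actually provides:

  <Row>
    <XBRLtype>aicpa:operatingExpenses.laborCosts</XBRLtype>  <!-- hypothetical tag -->
    <Geographic>Branch 10 (Peoria)</Geographic>
    <OrgUnit>Department A</OrgUnit>
    <Description>Overtime</Description>
    <Amount>540.00</Amount>                                   <!-- illustrative amount -->
  </Row>

The overloaded code 5110-10A disappears; each dimension travels in its own field.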
(Why does this matter? Because owners, managers, and parties to transactions cannot achieve economies or effective transaction processing over the internet
if everybody continues using incompatible naming and coding of transactions.
Bear with me.) Now let's consider some emerging problems in classification
of accounting transactions. The transactions need to be transported among
and between 3 discrete levels of software comprising the business system
of an SME:

Level 1 - the SELF. The comprehensive, complete general ledger
which models the enterprise (the root ledger).

Level 2 - the BSP or business
software module, or sub-ledger or functional module. Historically the sales
journal, purchases, payroll or other journal or module, running on the
desktop or local server. These functional modules are increasingly provided
remotely on ASPs or BSPs or DotComs thru which we buy and sell. For example,
web stores, timesheet systems, payroll services, purchasing portals, and
trading hubs and marketplaces of all kinds. These BSPs enable SMEs to conduct
business over the internet, integrating customers and suppliers and payment
providers, making non-repudiable orders and payments without large investment
in local firewalls.

Level 3 - the OTHER. The 3rd parties with whom we buy
or sell thru our website or BSP, or thru their website or BSP. There is
a 3rd party with whom you conduct business, who insists upon data integration
just like you. This is like the evolution of plant life into animals, who
had eyes to see. Unconnected desktop software packages are vegetables--they
can't understand anybody exists except themselves. They are profoundly
autistic. Obviously there are many XML vocabularies between level 2 and
3. However, Level 1 will exist for SMEs in a way that it has never existed
for B2B commerce, EDI and so forth, because larger businesses have combined
Level 1 and Level 2 in enterprise software. Large ERP/enterprises literally
conduct business thru their firewalls, with third parties (the OTHER) thru
EDI and other custom connections. SMEs will conduct business thru multiple
services (ASPs or BSPs). They will provide summary entries in a whole variety
of custom formats, which must be viewed by the root ledger as though they
were subledgers. These General Journal postings will need to contain classifications
for posting to our "general ledger". This will really be a Root Ledger
since some of these DotComs will themselves be General Ledgers having trial
balances. The unthinking assumption of every BSP and dotcom I have researched
is that every SME will have their own custom Chart of Accounts and that
any interface to the root ledger, accordingly, must be visited and configured
and mapped for them, by an expert. Does this sound a little like EDI? It
might be a good thing if XML transactions arriving at the root ledger of
the company contain some minimal classification as to the broad category
of the transaction. This could reduce the need to push custom account codes
or other business attributes outward, to the subledger or module for every
party, for every transaction, thru every trading hub. It would be nice
if all DotComs, ASPs and BSPs could agree upon an XML schema for the general
ledger, which would define unambiguously how to transmit their dealings
to the root ledger by reference to the values permitted within that GL
Schema alone. The alternative is years of the usual nonsense of asking
the CPA what chart of accounts code to use in each case, futzing with local
configurations and then configurations of the interfaces. Think about this.
There WILL be a general ledger, folks. It WILL have account classifications.
I believe it is possible to articulate a "standard chart of accounts" coded
into the GL Schema. It might have only 20 codes. These are never very abstract.
They are always Payables and Receivables of various kinds (including notes
receivable etc.) and of various expected maturities. Even a cash account
or merchant card receivable is really a type of receivable. Maybe this
is the direction we need to go-- leave out the accruals, depreciation and
other stuff not needed by the ecommerce community and instead, focus intensely
on the business purposes which can only be provided by the root ledger,
and are inherently impossible to achieve in distributed modules on multiple
BSPs.

Inherent root ledger requirements:
-------------------------------------
- maintain fiscal control (e.g. control totals) over money and accounts receivable and payable that exist out on the remote DotComs and functional modules as well as local systems, fixed assets etc.
- maintain near-realtime information on cash balances, cash flow attributes of all events which have happened, and cash flow sources and needs predicted at future dates, sufficient to manage the cash flows of the business.
- consolidated views of accounts receivable / accounts payable sufficient to maintain control over supplier and customer balances when the enterprise touches those same classes of customers or suppliers thru multiple points (e.g. selling locally and on a web storefront; buying locally and thru web purchasing portals)
- GAAP financial statements / financial reporting
- timely and accurate tax reporting
- consolidations and eliminations wherever multiple sub-ledgers exist, including multiple "general ledgers" on BSPs as well as companies/locations.
- foreign currency translations

Don't believe me? Think for yourself. Companies
will conduct business via multiple BSPs, resulting in transactions stored
in multiple locations. Most business processes do not inherently require
any shared data, for most users. For others, the bank statement provides
a unifying view. But are any of the above requirements possible without
a comprehensive, combined view someplace? The answer is no. These are the
classic financial controller/treasury functions. Companies will still need
to implement GAAP classifications at some level. Historically, companies
implemented finely granular, custom charts of accounts sufficient to enable
GAAP reporting. The XBRL schema, AICPA-US-GAAP-CI-00-04-04.xsd contains
over 1,100 ELEMENT classifications. Companies could implement true GAAP
classifications, natively, in their accounting systems. Historically, to
produce financial statements, companies often mapped their charts of accounts
into the lines of their financial statements:
- combining some accounts,
- changing the order of some accounts, and
- changing the label of other accounts.

XBRL can be immediately implemented by any accounting system
by storing the appropriate XBRL tag for each account into the chart of
accounts table. For example if there are multiple accounts receivable,
they might all be tagged with the XBRL tag "aicpa:currentAssets.receivables".
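Concretely, the mapping could be as little as one extra attribute per account. The account numbers and names below are made up for illustration; only the tag string itself comes from the example just given:

  <ChartOfAccounts>
    <Account code="1200" name="Trade receivables"
             xbrlTag="aicpa:currentAssets.receivables"/>
    <Account code="1210" name="Merchant card receivables"
             xbrlTag="aicpa:currentAssets.receivables"/>
    <Account code="1220" name="Notes receivable - current"
             xbrlTag="aicpa:currentAssets.receivables"/>
  </ChartOfAccounts>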
For any non-accountants reading this: accounting entries are usually stored
as a collection of rows, each having an amount and an account code representing
the different results of a transaction (for example the simplest sale might
be a 3-line journal entry: sales 100, tax 8, account receivable 108.) There
is absolutely no reason this system of notation is necessary; however there
is absolutely no likelihood that millions of fulltime accountants worldwide
will change anytime soon. XBRL also enables root ledger scenarios in which
BSPs encode every row of transaction *directly* with its XBRL code. Note
carefully the contrast with my previous paragraph in which the XBRL code
was associated with the chart of account code. GAAP reporting can be achieved
without any other "chart of accounts" than XBRL tags. SMEs always have
multiple assets and liabilities which are in the nature of payables and
receivables with respect to 3rd parties-- the combination of the XBRL tag
and party identity would establish uniquely these accounts and their transactions.
Isn't that more transparent than establishing a numeric account code? Traditional
account codes were useful in data entry and other mnemonic processes which
are de-emphasized in root ledger architectures. Returning to the question:
what chart of account classes exist within a root ledger architecture,
whenever the enterprise uses BSPs or other distributed processes? The answer
appears to be nothing other than XBRL tags.

XML Schema for general ledger
-------------------
Then, what structures should be included, within any
XML Schema for general ledger? The following would please me very much.
I have simplified the root ledger to a single table, requiring no Chart
of Accounts table. Here is my straw man draft, for the "R" required fields
and some optional fields, of a root ledger table (which translates immediately
into a very simple XML Schema for general ledger-- a completely flat set
of ELEMENTs.)

  Host - system, host or software where this data originated.
R Company - the legal entity which executed this transaction.
  OrgUnit - department, section etc.
  Geographic - branch or business location for sales tax
  Country - ISO 3166 (optional because default driven, in practice.)
  Language - ISO 10646 (optional because default driven)
  Journaltype - sales journal, purchase journal, etc
R TransactionID - uniquely identifies this transaction (set of rows)
R TransactionRow - uniquely identifies this row (e.g. line of journal)
R TransactionDateTime - ISO 8601 date/time
  TransactionType - Actual, Budget, Forecast, TaxAdjustment (default=A)
  EnteredDateTime - ISO 8601 date/time
  User - authenticated user who recorded this transaction
  Reference - source doc. number or index, etc.
R XBRLtype - classification of this transaction for GAAP reporting
R Party - uniquely identifies the reciprocal party to this transaction.
  Partyclass - role or type of party as customer, supplier, etc.
R Amount - any number consistent with your currency.
  Currency - ISO 4217 Currency codes (optional because default driven)
  Description - explanation text or memo string etc.
  TransactionCode1 - user defined category, class, T-code etc.
  TransactionCode2 - user defined category, class, T-code etc.
  TransactionCode3 - user defined category, class, T-code etc.
  TransactionCode4 - user defined category, class, T-code etc.
  TransactionCode5 - user defined category, class, T-code etc.
  Maturity - if this is an external payable/receivable, when it is due
  Cleared - mark this row as matched, settled, cleared, etc.
  Approved - approval boolean for internal approval.
  Reviewed - boolean for external auditor or reviewer
  Employee - internal party who executed this transaction
  Job - uniquely identifies the job, project, etc.
  Product - uniquely identifies the product or service.
  XMLdocument - any XML business doc. accompanying this transaction.

Obviously, in the accounting business my XML Schema will drive somebody else to apoplexy.
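To make the flatness visible, here is roughly what the simple sale mentioned earlier (sales 100, tax 8, account receivable 108) might look like as an instance of such a schema, filling in only the "R" required fields from the list above. The wrapper element, the company and party identifiers, and the two tags marked as hypothetical are invented for illustration, and the signs follow a debit-positive convention, which is itself an assumption:

  <GeneralLedger>
    <Row>
      <Company>EXAMPLE-CO</Company>
      <TransactionID>TXN-0001</TransactionID>
      <TransactionRow>1</TransactionRow>
      <TransactionDateTime>2000-07-11T13:14:00</TransactionDateTime>
      <XBRLtype>aicpa:currentAssets.receivables</XBRLtype>
      <Party>CUSTOMER-42</Party>
      <Amount>108.00</Amount>
    </Row>
    <Row>
      <Company>EXAMPLE-CO</Company>
      <TransactionID>TXN-0001</TransactionID>
      <TransactionRow>2</TransactionRow>
      <TransactionDateTime>2000-07-11T13:14:00</TransactionDateTime>
      <XBRLtype>aicpa:salesRevenue</XBRLtype>       <!-- hypothetical tag -->
      <Party>CUSTOMER-42</Party>
      <Amount>-100.00</Amount>
    </Row>
    <Row>
      <Company>EXAMPLE-CO</Company>
      <TransactionID>TXN-0001</TransactionID>
      <TransactionRow>3</TransactionRow>
      <TransactionDateTime>2000-07-11T13:14:00</TransactionDateTime>
      <XBRLtype>aicpa:salesTaxPayable</XBRLtype>    <!-- hypothetical tag -->
      <Party>TAX-AUTHORITY</Party>
      <Amount>-8.00</Amount>
    </Row>
  </GeneralLedger>

The three rows sum to zero, which is all that double entry amounts to once each row carries its own classification and party.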
I am fully aware of dozens of problems with the above table design and
blissfully unaware of dozens of others! Don't you just hate it, when you
have to actually identify specific fields, to actually get some work done?
For those .01% of companies that have more than 5 transaction codes on
their GLs, let them edit the DTD and extend it. Or stuff codes into the
XML document field. Or, the XBRL GL schema can have 10 transaction codes and
that will cover all but .001% of companies. Let the .001% eat cake. While
XBRL delays the GL schema, 100% of SMBs are "eating cake". Perhaps what's
lurking under the surface is a tension between developers who are already
running traditional procedural code and CDEA, versus the newer developers
who depend on various object-oriented structures, set-oriented commands, XML transformations, etc. and hate writing code to assemble/disassemble
their new data structures into bands of general ledger journal entries.
Most of the new/better vertical and horizontal business modules are probably
not procedural CDEA under the surface. However, the new technologists are negligent in articulating any replacement for A=L+OE semantics, and the thousands of different kinds of CDEA postings currently used. Accordingly,
the "object" people, XML transformation whizzes, etc. basically don't have
any business solution for the root ledger requirements other than CDEA.
In any case, it will be very useful and worthwhile to publish a GL standard
that can transport data at least among desktop software like Quickbooks
and Peachtree, and among small business BSPs on the internet. This flatfile
would enable many business processes without even the need for subledgers,
AR, AP, or sales/purchase journals etc. Let's get moving and define an
XML schema for general ledger. It's going to be CDEA, folks. It consists
of Rows. And there is going to be a potentially large, potentially arbitrary
collection of fields on each row. So let's get to work.

===================
User Sovereignty

This is an unfamiliar topic to some users of computers and the internet. Below, you will find a life preserver,
thrown to us by people who have put more thought into it than the average
CPA or accountant. I preface these readings with one simple observation:
people are born free, and are inherently free. There cannot be any tyranny
or exploitation but by the ignorant surrender of one's natural freedom.
We are all free to act according to our own interests. Nothing is forcing
us to subordinate our wishes to the wishes of companies who would like
to exploit us. Today's computer hardware and software are profoundly shaped
by the imperative of making money for their vendors. Computers are not
a pure utility like foodstuffs, minerals, or energy but rather, contain
a whole spectrum of tricks and techniques that create additional income
for vendors. We have gradually allowed another "broadcast industry" model
in which we gave away 1/3 of our precious time to advertising, and allowed
the commercial selling imperative and message to permeate even the editorial
content of newspapers, radio and TV. In the accounting software industry,
these have gotten so far out of hand that they threaten small business'
ability to exist in the marketplace, alongside larger companies. Before
the internet, it was not a fatal problem that your vendor locked in your
data and refused to provide interfaces. But today, if you cannot buy, sell,
and pay and receive money over the internet freely, you may be blocked
from access to markets themselves. You cannot access goods and services
at the best prices, and you cannot sell for the best prices. When you go
out to evaluate accounting and business software, keep this in mind: the
software should work 100% for the interests of the owner. Not one single
bit or byte should be allowed to pursue the interests of the software vendor,
or tax collectors, or accountants, or employees, agents, contractors or
any other third party, against the interests of the owner. Owners do have
social responsibilities, and responsibilities under contracts, and under
laws. But until those software vendors or laws require owners of businesses
to install automatic enforcement machines in our own homes and businesses,
taking away our freedom, it is sheer lunacy to buy them voluntarily. *
Todd F. Boyle CPA http://www.GLDialtone.com/ * tboyle@rosehill.net Kirkland
WA (425) 827-3107 * XML accounting, webledgers, BSPs, ASPs, whatever it
takes

http://www.openresources.com/documents/
http://opencode.org/h2o/
http://cyber.law.harvard.edu/projects/governance.html
http://cyber.law.harvard.edu/projects/opencode.html
http://cyber.law.harvard.edu/projects/security/
http://www.flora.org/flora.comnet-www/1472
http://internet.tao.ca/
http://www.people.cornell.edu/pages/pcs10/revised3.htm
http://www.buildfreedom.com/ft/internet_freedom.htm

So, what would the internet look like if it had been designed to serve users instead of vendors? What if there were a vast, infinite storage space, just a white space, and you could put unlimited content out there? What if everybody on the network were required to use 1024-bit encryption because it was built into the clients on the network and therefore, even dumbshits encrypted 100% of their stuff?
That's the way Windows and Unix and Linux could have been designed, but
they were not. They were designed for top-down control. The client for
this network? A pure W3C XML parser and XSLT transformation engine. Is
there any kind of content or processing that can't be done or rendered,
etc. by an XML processor? And, when you go to read your stuff or process
with it, you draw upon all the CPUs that are sitting idle nearby... That
would be a computing environment worth having, because it would take back
control of security from sysadmins and affix it firmly in the leaf nodes.
heh heh! Giving access control over the data to the individual, permanently
and irreversibly. The data storage could be the same as today, either peer
to peer or on some ISP or ASP or even a relational database. Why not? Finally,
a good use for an RDBMS. As soon as the peer network has a general ledger
built into it, it would start to be adopted. Right now, there is no place
to put your quarter to pay for your resources. Isn't that ridiculous? Surely,
this won't continue much longer. Capitalism and the ideology of self-determination
unleash much greater economic growth because more people get more of what
they want. The entire economic system is organized around discovering desires,
amplifying them with advertising, and satisfying them. The whole psycho-limbic system of the body and mind of customers is tapped. The worker is harnessed more completely. Everybody is compelled to work within a single pooling of interests. There is only one road system. Only one downtown.
Only one school system, telephone and banking system. Our combined power
makes us a more powerful society. We can out-produce any country that opposes
us, or destroy them if that serves our desires more efficiently. This is
democracy because we can vote every 4 years. This proves the people are
in control. Isn't that nice? We have this radically symmetrical economy,
tuned in to the average desire of all people. We are completely harnessed
to serving our neighbors' base desires for oil, SUVs and sports arenas.
What next? Is this freedom? Is this the way it's going to be?

==============
Subject: Web ledgers - marchin to the freedom land
Date: 06/10/2000
Author: Todd Boyle

I responded to CIO magazine's question of April 1, http://www.cio.com/archive/040100_diff.html
The most interesting question to me is how individuals can control their
spending decisions with greater granularity, for example, obtaining products
and services through the market system, and participating in large joint
enterprises like corporations, without giving away power to bosses and
hierarchic forms of organization. Let's say, for instance, a person's convictions
are nonviolent. Today, when we buy a product it is like representative
democracy. It's all-or-none, and you delegate your power to your supplier.
You don't have a line-item veto. You don't even know who is getting the
money. On some level, when you pay the clerk at the gas pump, you know
some of the money is going for military defense, etc. but you don't know
how, how much, or how to stop it. There are some minor outbreaks of consumers
buying tuna from canneries where dolphin-safe practices were followed,
or wood from renewable forests. As value networks and supply chains permeate
the economy, we will have more information about where our money is going,
new choices where to buy, and that's a fancy way of saying, more capability
to pay the oil driller and refiner but not pay for the foreign extraction
tax. You will buy coffee, but not pay for the military dictatorship, etc.
We will similarly have new choices where to sell. We will have new choices
with whom we wish to work as partners. Economic decisions will be made
based on network participants; costs and revenues will be allocations within
partnerships. Within this model you have freedom, and there is no way for
corporations to extract rents and dividends or coerce consumptive behaviors,
or control bodies for 40 hours a week. Here read this: http://www.gldialtone.com/hypercub.htm
which explains how the entire economy may be viewed as a mathematical model,
with each actor receiving (rather imperfectly) their allocations of costs
and income. The new architecture of business has pierced the company-centric
view, and moved to hosted systems which span enterprises. Do a search on "value networks". This is analogous to the development of human consciousness
beyond the individual appetite or ego, but nevermind. The internet is the
operating system, and BSPs and dotcoms are the programs. There are 1500
BSPs; look at the directory www.aspnews.com and all the XML interoperability
and middleware. Web ledgers give the individual much more power, more granular
control over where and how their money and power are spent. Accordingly,
they are a better solution anyway, and more consistent with the existing
culture. I hope I have given you a glimpse into this fascinating new field,
* Todd F. Boyle CPA http://www.GLDialtone.com/ * tboyle@rosehill.net Kirkland
WA (425) 827-3107 * XML accounting, WebLedgers, ASPs, GL dialtone, whatever
it takes Webledger technology is just a quantum leap more intelligent and
multidimensional than today's "money system". The unrestrained growth in
the power of the state, and the percentage of the national economy directed
by the government, is like a cancer. Too many people believe in simplistic
answers like Napster, anonymous commerce, digital cash, tax evasion, etc.
To make a meaningful and lasting contribution to human freedom, we have to grow newer financial and software structures that are literally stronger
and more efficient than the state-corporate complex that is strangling
us today. That structure will certainly emerge in internet marketplaces,
of course. Today's boycott by suppliers, wishing the B2B marketplaces would
die and go away, cannot last. Bricks and Clicks will finally be made accountable
to customers, because customers will certainly be aggregated, and rich,
granular views will be provided to those customers into the vendor choices
and the detailed attributes inside the product. The unnecessary layers
of costs will be chased out of those products. Do you see what I mean?
Todd