2008-07-28

An ergonomic keybinding for emacs

This page is for discussion and comments on the page

http://xahlee.org/emacs/ergonomic_emacs_keybinding.html

2008-01-22

The Jargon “Lisp1” vs “Lisp2”

Perm url with updates: http://xahlee.org/emacs/lisp1_vs_lisp2.html

Why You Should Not Use The Jargons Lisp1 and Lisp2

Xah Lee, 2008-01-10

[The following is originally posted as a reply in comp.lang.lisp about “Lisp-1” vs “Lisp-2”.]

Someone (Propon...@gmx.net) wrote:

Having read Touretzky's introduction and the first half of Paul Graham's On Lisp, I'm wondering what the advantages of a Lisp-2 are over a Lisp-1

It seems to me that a Lisp-2's ability to use a single symbol to represent both a function and a value is a minor advantage, although I'm sure some regard it as a disadvantage. On the other hand, a Lisp-2 requires the clunky, IMHO, #' operator and cannot have an elegant, universal DEFINE like Scheme's.

Yet I've heard that a Lisp-1's macros are necessarily less powerful than those of a Lisp-2. Is that true? Are there some other big advantages of a Lisp-2 that I'm missing?

Please try to avoid the jargons lisp1 and lisp2.

Recently i wrote a longish essay on the harm of jargons in functional languages. The jargons lisp1 and lisp2 are one of the better examples that illustrate the issue.

• The jargon is opaque. The words do not convey their meanings.

• Being an opaque jargon, it is often used subconsciously by people in a group, to communicate that they are in-group. (a class-differentiation strategy of human animals, as is much of slang's purpose) Consequently, these jargons are thrown about often without the writers actually understanding them, or wishing to discuss them in any way.

Further, this issue is relatively minor, having little real-world practical impact. It is not unlike a war about which end of an egg one should crack. (i.e. big endian vs little endian; see Gulliver's Travels, Part I: A Voyage To Lilliput, Chapter 4)

Why is this issue minor? Consider it broadly among the human animal's computing activities. I give 2 examples:

Consider that in the 1960s people went to the moon. (Moon landing) Imagine the complexities involved in the physics, mathematics, and computation, at a time when computers were some one thousand times slower than today, using punch cards, and there were not many computer languages, not even modular programing.

For another example, consider today's PHP language. Linguistically, it is one of the most badly designed languages, with many inconsistencies, WITH NO NAMESPACE MECHANISM, yet it is so widely used that it is in fact one of the top 5 most deployed languages. If one of PHP or the lisps, together with all associated applications written in it, were to suddenly disappear from the face of this earth as a catastrophic punishment from Zeus, and all the leaders of nations were to have experts assess the damage as they do with natural disasters, it is probable that losing PHP would be an order of magnitude greater loss.

Now, suppose we narrow the scope of “lisp1 vs lisp2” to its context: computer language design. There are many issues in language design. For example: dynamic scope vs lexical variable scope, various models of typing systems (dynamic, static, variable/value based, algebraic types, no types, with or without an inference system), computing paradigm (OOP, functional, procedural, pattern matching, database), evaluation model (greedy vs lazy) ... etc. Among all language design issues, “lisp1” vs “lisp2” is really one of the least significant, and it actually arises practically only in Lisp, due to its peculiar concept of “symbols”.

The existence of a name for a concept or idea, especially an opaque jargon, tends to get people to throw the name around and argue about it unnecessarily.

To people in the lisp communities, please stop using the term. If necessary, say Common Lisp's model or Scheme Lisp's model, or, use a communicative term like multi-meaning-space and single-meaning-space.

How Did Multi-meaning-space Come About

Now, in the following, i wish to discuss some associated issues. In particular, thoughts on how multi-meaning-space came about.

There's a curious question. Why does the “lisp1 vs lisp2” issue happen only in lisp? Why don't we have “perl1 vs perl2”, “java1 vs java2”, “ML1 vs ML2”, or any language with a variation on this?

This has to do with the concept of lisp's symbol, which doesn't exist in other languages (notably except Mathematica).

Now, a further question is: why can a Common Lisp symbol of a particular name have multiple meanings? (That is, a name in CL can be both a variable and a function in the same block of code at the same time. This peculiar fact we might give a professional terminology for ease of communication; we might call it Common Lisp's multi-meaning-space, or just multi-meaning-space, or meaning-space.)
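The contrast can be sketched in a few lines of Python, used here only as a neutral illustration (Python, like Scheme, is single-meaning-space): a name holds one meaning at a time, so binding a value to a name discards its function meaning. In Common Lisp, by contrast, a symbol's function cell and value cell are separate, so `(defun tally ...)` and `(setq tally 99)` can coexist.

```python
# Python is single-meaning-space: one name, one meaning at a time.
def tally(xs):
    """Sum a list of numbers."""
    return sum(xs)

total = tally([1, 2, 3])   # here the name "tally" means the function

tally = 99                 # rebinding: the function meaning is now gone
# tally([1, 2, 3]) would now raise TypeError: 'int' object is not callable

print(total)   # 6
print(tally)   # 99
```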

Now, the question is, why do Lisps before Common Lisp have this multi-meaning-space feature?

I do not know much about the technical aspects of Lisp's history. However, i can venture an educated guess.

Old Lisps' multi-meaning-space feature, just like so many of its features (the cons cell for lists, the function “sort” being destructive, the semi-regular syntax using nested parens, etc), was simply designed as is, without necessarily any explicit, important rationale. In other words, it is probably a characteristic that happened to be convenient, easy to implement, or not thought about at the time, or that simply went one way rather than the other. (as opposed to prominent issues that call for conscious, explicit decisions with important ramifications, such as the syntax (sexp), the symbol system, the evaluation model, ...etc.)

Now, as i mentioned before, this (single/multi)-meaning-space issue, with respect to the human animal's computing activities, or with respect to the set of computer design decisions, is a trivial one, having almost no practical impact. But some human animals, in the history of their power struggle, produced as a byproduct the jargons “lisp1” and “lisp2”. And because the question of which end of the egg to crack is now blessed with a terminology that has all the countenance of impartiality, it furnishes and fuels countless arguments and fights on this non-issue between the Scheme Lisp and Common Lisp factions, even though the origin of the power struggle on this particular issue (the Common Lisp Standard) has long died. (more specifically, every few months the issue rises up in comp.lang.lisp or comp.lang.scheme, with all colors and types of activity, from sincere to trite, from re-examination to political struggle to pacification.)

Note: for the technical details of the Common Lisp's meaning-space, and the origin of the jargons “lisp1” and “lisp2”, see: Technical Issues of Separation in Function Cells and Value Cells, by Richard P Gabriel, Kent M Pitman, 1988.

FAQ, and The Meaning Of Name Space

Xah Lee, 2008-10

The terms “Lisp-1” and “Lisp-2” were invented expressly to *avoid* mentioning Common Lisp or Scheme; therefore we shouldn't use terms such as “Common Lisp model” or “Scheme model” to describe it.

The creation of these terms was in the context of a political struggle, namely the creation of the Common Lisp standard, as Kent Pitman has tirelessly and repeatedly pointed out here in the past decade.

As i mentioned in the article, the political context for the birth of lisp1/lisp2 ended about 2 decades ago.

Except when discussing history, there's no need to stick with the jargon lisp1 and lisp2.

In fact, i argued, if not for the colorful jargon “lisp1/lisp2”, the issue wouldn't even be much noticed, as it is relatively unimportant. Common Lisp today is multi-meaning-space, and Scheme Lisp is single-meaning-space. That's just a language fact, and there's not much one can or would want to do about it in practice.

Also, another point Kent has mentioned is that people in the community naturally picked up these terms. Actually, in my opinion, the popularity and continued use of these terms has much to do with Kent's promotion. In the past decade in “comp.lang.lisp”, he has constantly reminded people of his article, and even explicitly requested that others use the terms Lisp1 and Lisp2, as opposed to other terms people used when discussing this issue, such as “Common Lisp way” or “Scheme way”.

Lisp does not use the term “name space” differently from other languages. The meaning is the same, only the implementation details are different.

Namespace is a context for names. For example, you might have a function named “CreateSound”, but this could be just defined in your file, or it could be imported from a library, and another library could also have a function called “CreateSound”. This is when the namespace concept becomes useful. For those curious, the “space” part of the terminology came from mathematics, e.g. vector space, topological space. It effectively means a “set”. The reason that “space” and “set” are often synonymous is due to the study of geometry, because a space is a set of points.

The namespace concept is tightly connected with the language's library/package/module system. In emacs lisp, PHP, and Scheme Lisp, for example, there is actually no namespace mechanism, so you have to create unique names for every function or variable you write. (Note: PHP v5.3 (2008-10) and Scheme R6RS (2007-08) both introduced namespace mechanisms, but they aren't in common use yet.) In most other langs (e.g. Perl, Python, Mathematica), there are namespace mechanisms, so that you can call a function in some library using its full path, or call it without the full path by “importing” the function's name into your current file's namespace.
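The import mechanics can be sketched in Python (the library and function names here are made up for illustration; real libraries would be separate module files, simulated here with `SimpleNamespace`):

```python
from types import SimpleNamespace

# Two hypothetical libraries, each defining a function named create_sound.
# A namespace (in Python, a module; simulated here) keeps the identical
# names from colliding.
joesound = SimpleNamespace(create_sound=lambda freq: ("joe", freq))
marysound = SimpleNamespace(create_sound=lambda freq: ("mary", freq))

# Call by full path: no ambiguity about which create_sound is meant.
a = joesound.create_sound(440)
b = marysound.create_sound(440)

# "Importing" brings one chosen name into the current namespace.
create_sound = joesound.create_sound
c = create_sound(440)

print(a, b, c)
```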

However, having a namespace system still does not solve the problem of unique naming. That is, someone could write a library named Sound with a function named CreateSound, but someone else could also have written a library and function of exactly the same names. So, when 2 programs use CreateSound, even though their source code may be identical, they might behave very differently. (this is a problem for example in emacs, where lots of people have written a module, all identically named (e.g. javascript-mode), so that loading joe's version overwrites mary's version. Or, they use creative and less intuitive names to avoid name conflicts (e.g. a mode for javascript has been named javascript-mode, js-mode, js2-mode, and there's xml-mode vs nxml-mode, html-mode vs html-helper-mode, perl-mode vs cperl-mode, irc vs erc, ...etc.))

Java solved the uniqueness problem by adopting the domain name system for library names, as a convention. So, if Joe writes a package, by convention he should name his package something like “com.joe-bloke.sound”. This domain-name convention has been adopted in a few other language systems too (e.g. XML namespaces).

With regard to lisp: in emacs lisp, for example, the lack of namespaces is a major problem, and it is also a major problem of Scheme Lisp (the R6RS spec tried to fix this problem, but not without huge controversy over the design). And as far as i understand from hearsay and gossip here, it is also a major problem for Common Lisp.

The above is a brief introduction on the concept of namespace as it is used, and some of its issues.

Now, the jargons lisp1 and lisp2 refer to a concept entirely different from namespaces. It has to do with what lisp calls a symbol's “function cell” and “value cell” (note: as given by the title of Kent's article, “Technical Issues of Separation in Function Cells and Value Cells”). What these function cells and value cells hold is, basically, the value of a lisp var/function/subroutine/class. (Note: var/function/subroutine/class are typically called a “name” in computer languages, but in lisp are called a “symbol”.) The “function cell” and “value cell” are effectively a “meaning space” (or “value space”).

Jargons And High Level Languages

Perm url with updates: http://xahlee.org/emacs/jargons_high_level_lang.html

Jargons And High Level Languages

Xah Lee, 2008-01-09

[This essay was originally posted to comp.lang.lisp in a discussion of an ideal high level language.]

Adding to Ray Dillinger and Jon Harrop's posts about an ideal functional lang system, my wishes are:

The language will be absolutely high-level, meaning in particular:

• The language's documentation will not need to mention any of the following words: pointer, reference, memory allocation, stack, hash, cons cell, linked list, circular list.

• The language will not have the concepts of binary bits, bit operators, bytes, etc. (see the optimization section below) (However, the language can (and should) support computing with arbitrary number bases, and when the base is 2, the compiler should of course automatically map it to bits on a chip as its method of achieving speed.)

• The language's computational model should be as simple, high-level, and mathematics-based as possible, as opposed to being a model of some abstract machine or implementation (e.g. OOP, Lisp Machine). In particular: the language will not have concepts of “reference” or “object” as in Java, or “object” as in lisp. (Here, the few of today's languages that qualify include: Mathematica, PHP, Javascript.)

• The language's syntax and semantics should be as consistent and regular as possible. For example, lisp's syntax would not be acceptable here. (it has “' # ;” and various ad hoc irregularities) As another example, Common Lisp's multi-meaning names would be considered ad hoc, irregular semantics here. (i.e. a symbol can be a variable as well as a function)

• The language's variables must not have types. Its values can have types. And these types must be mathematical, not computer-engineering byproducts. Specifically: the language will not have any concepts or terms of: float, double, int, long, etc. (it can, however, have concepts like exact number, or n-bit approximate number aka machine-precision number (see below on optimization declaration).)

• The numbers in the language can have the following types: integer, rational, real, complex number. (and possibly extension from these, such as algebraic number, etc.)

• The language will not use extraneous jargons that came from computer engineering. (these jargons in part exist as a side effect of the human animal's power struggles in academia) More specifically, the language will not have anything named “lambda”. (name it “Function” or simply “Subroutine”) It should not have anything named or documented as “Currying”. (when appropriate, explain with precision, such as: applying a function to a function, or decomposing a multi-valued function, as appropriate. If necessary, invent new terms that are communicative, e.g. Function Arity Reduction Mechanism.) The lang should not have a name for, or doc about, tail recursion. (if necessary, use something like: “the compiler does automatic optimization of recursions when ...”) Similarly, it should not use or even mention the jargons “closure”, “higher order function”, “first class citizen”, “hygienic macro”, “referential transparency”, ...etc. (The language, as an ideal high-level language, should, however, support these features by default, by a simple principle of consistency and regularity of semantics (and syntax). It should not need to mention or discuss them in any way, in particular not using these terminologies.)

The language can have many implementation/compiling-related concepts. (which are exemplified above, occurring in the other langs, and which we forbid here) However, these concepts must be designed as special declaration constructs, whose purpose the programer _clearly_ understands as instructions/hints for the compiler to create fast code, yet which have no purpose whatsoever as far as his program is concerned. So, for example, take the typical concept of “int” in most langs. In our ideal lang, it would be like this:

myNumber= 123
DeclareForOptimization(myNumber, (number, (8-bits,no-decimals)))

For another example, suppose you need a hash table. In our high-level lang there's no such stupid computer-engineering concept, nor such a stupid term as “hash”. However, because we are great mathematicians, we know that compilers cannot in theory always determine a user's use of lists for the purpose of optimization (e.g. as a hash table). So, the lang will have a usage declaration for the purpose of compiling/optimization, something like this:

myKeydList= ((k1 v1) (k2 v2) ...)
DeclareForOptimization(myKeydList,
  (list, ((fastLookup true),(beginNumOfSlots 4000),(extendBy, 500), ...))) 
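A faint real-world echo of this usage-declaration idea can be sketched in Python (used here only as an illustration): the stdlib `array` module lets the programer hint that a sequence of values should be stored as machine-level 8-bit integers, without changing what the values mean to the program.

```python
from array import array

# A plain Python list: fully general, no storage-engineering exposed.
xs = [1, 2, 3]

# An optimization hint: store the same values as signed 8-bit integers.
# The program's meaning is unchanged; only the representation differs.
xs_fast = array('b', xs)    # 'b' = signed 8-bit integer storage

print(list(xs_fast) == xs)  # True: same values, different storage
```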

Languages are Getting Higher Level

The above criteria are basically all satisfied by Mathematica, except the last item about complete, systematic, declarative statements for the purpose of optimization/compiling.

In general, the above criteria are more and more satisfied by modern high-level languages, in particular PHP and Javascript. (except in places where they can't help it, such as the widely established int, long, double, and bit operators, created (or propagated) by the unix/C fuckheads.)

Languages of the past 2 decades already observe the above criteria more and more. Roughly in order of history and high-level-ness: C, C++, Java, Perl, Python, Ruby, Javascript, PHP.

The 2 major exceptions to this progression are lisp and Mathematica. Lisp is extremely high-level; however, due to its 40-year-old age, today it inevitably has much socially un-fixable baggage. (such as the cons business) Mathematica, which was born about the same time as Perl, is simply well designed, by a certified genius, who happens to be a rich kid to boot, and happens to not be a tech-geeking moron but in fact is a gifted entrepreneur who started an exceedingly successful company, and whose social contributions to the world have made a revolutionary impact. (besides his scientific contributions, e.g. in physics.)

There is a common conception among lispers that modern langs are converging more and more to lisp. No, they are not converging to lisp. They are simply getting higher-level, and lisp is the grand-daddy of high-level, intelligent computing and design.

Jargons and Sapir-Whorf

Many functional programers are acquainted with the idea that language influences thought. (tech geekers will invariably invoke Sapir-Whorf.) However, these average people (albeit elite) do not have the independent thinking to realize that terminology, or the naming of things, has a major impact on the thing and on its social aspects. More concretely, how a thing/concept is named has a major impact on education, communication, social adoption, and actual use of the thing/concept. As an actual example, many high-level functional languages are invented by academicians, who, in general, do not have much social-science-related knowledge or understanding of its practicality. And, being high-powered computer engineers, they are conditioned in their thought patterns around dense, stupid computer jargons. And thus in their languages these jargons permeate throughout. The practical effect is that these languages perpetually swirl around their academic communities, with a viral transmission so powerful that it sucks anyone who tries to touch/study/use the language into the mumbo jumbo. (most exemplary: Scheme, Haskell) This basically seals the fate of these languages.

(of course, the one notable exception is Mathematica. It is still to be seen whether Microsoft's F# team will avoid stupid jargons.)

Is Lisp's Objects Concept Necessary?

Perm url with updates: http://xahlee.org/emacs/lisps_objects.html

Is Lisp's Objects Concept Necessary?

Xah Lee, 2008-01-22

[The following are edited versions of posts originally posted to comp.lang.lisp.]

I've been programing Mathematica since 1993, and worked as an intern at Wolfram Research Inc in 1995. I coded Mathematica daily up to 1998, and on and off since then.

I started to learn emacs around 1997, and have used it daily, about 8 hours a day, ever since and still today.

I started to learn lisp in about 1997. Mostly, i read, word for word, 3 chapters of the book Structure And Interpretation Of Computer Programs by Harold Abelson et al. (there are a total of 5 chapters) And i have read maybe half of the Scheme lisp lang spec, “Revised(4) Report on the Algorithmic Language Scheme”. This was all during 1997-1998. However, at the time i never really wrote any programs in lisp other than a toy factorial function.

In 2005, i started to casually learn emacs lisp. Because i have used emacs extensively and daily since 1998, emacs lisp is quite practical for me, and i have kept learning it and in fact have written quite a few small-scale, but nevertheless practical, programs. (typically less than a thousand lines of code, for text processing tasks)


I was trying to write an emacs major mode, and also a tutorial about it, in the past week. In this process, i found it quite hard. Writing a complete, robust major mode for public consumption requires knowledge of:

  • mode hooks (basic concept)
  • font lock system (some detail)
  • font and faces infrastructure (basic concept)
  • buffer local variables (detail)
  • regex (detail)
  • keymaps (detail)
  • syntax table (detail)
  • abbreviation system (basics)
  • Emacs mode conventions (A LOT of emacs and emacs lisp knowledge. This is not about “code formatting” or “where to place comments” conventions, but rather what variables and functions you NEED to define, and what type of values they must have, so that your mode will work as expected by an emacs user.)

(the list gives a rough idea; it is not meant to be complete or exact. (Elisp Manual: Major-Mode-Conventions)) (2009-01-23 Addendum: i have since written an emacs mode tutorial; see: How To Write A Emacs Major Mode For Syntax Coloring.)

Some of the above are related to an editing programing system and are inherently inevitable (such as hooks, the font system, buffer-local vars, the abbrev system). Some are more related to the language. For example, keymaps and syntax tables are themselves editing-programing-system-related concepts, but the data themselves ultimately call for understanding of lisp's char-table data type. Then, writing a mode also requires understanding “defvar”, “defcustom”, and others, which ultimately involves a good understanding of lisp's symbols, macros, forms, and evaluation model ...

In short, the study of writing a major mode forced me to start reading some of the lisp reference chapters that deal with the technical details of the lisp language, such as the chapters on “Symbols”, “Variables”, and “Functions”.

What prompted me to write this post is that yesterday i thought a bit about lisp's objects, and why Mathematica doesn't have them, and realized:

Mathematica, being as high level as it is, essentially merged the concepts of read syntax, print syntax, and objects into just one concept of “what you see is what you get” _Expressions_ (i.e. the code the programer types). That is, the code is the object, and the object is the code. In other words, there are no such “internal” concepts as objects. The code you type is called an expression, and expressions have meanings.

For example, let's say a list:

In lisp:
  (list 1 2 3)

Mathematica:
  List[1,2,3]

In lisp, we say that the text “(list 1 2 3)” is the read syntax for a lisp list object. So, the lisp interpreter reads this text and turns it into a list object. When the lisp system prints a list object, it prints “(1 2 3)”, which is the print syntax for lists.

In Mathematica, there are no such conceptual steps. The list “List[1,2,3]” is just a list, as is. There is no intermediate concept of objects. Everything in Mathematica is just an “expression”, meaning the textual code you type.

This is a high level beauty of Mathematica.
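A rough parallel to the lisp two-step can be sketched in Python (used here only as an illustration), whose `eval` and `repr` play the roles of lisp's reader and printer: the text and the in-memory object are distinct things, which is exactly the distinction that Mathematica's expression concept collapses.

```python
# In Python, as in lisp, text and object are distinct:
# eval() plays the role of the reader, repr() the role of the printer.
obj = eval("[1, 2, 3]")      # read: text -> in-memory list object
text = repr(obj)             # print: object -> its printed syntax

print(obj)    # [1, 2, 3]
print(text)   # [1, 2, 3]  (as a string)

# The round trip preserves the value, but the printed text is not the
# object itself; it is a separate representation of it.
roundtrip = eval(text)
print(roundtrip == obj)   # True
```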

Now, one might think further: suppose a modern language is high level (such as Mathematica, or to a lesser degree PHP, Javascript, Lisp), but the language also needs to handle general programing tasks such as writing a compiler, an operating system, or device drivers (tasks usually done with “compiled langs” such as C, Java, Haskell, and lisp too). Then is lisp's internal concept of “object” necessary in a language such as lisp?

Does anyone know?

(Note: typically all compiled langs have some behind-the-scenes model of sorts, almost always idiosyncratic to the language. For example, Java has “objects” (which are very different from lisp's) and references. Another good example that illustrates the point is the terms “list”, “array”, “vector”, “sequence”, which engender a huge amount of confusion. Typically, in online lang forums, each language's proponents see and understand only their own version of these terms.)


2008-01-22

Sometimes it's quite frustrating discussing things in newsgroups.

As most know, newsgroup discussions are characterized by ignorance, massive confusion, one-upmanship, and in general are a showcase of the loneliness of the tech geekers in technological societies ...

But in this particular thread, a contributing factor is that most people don't know Mathematica, to the degree that nobody here has ever seen a single line of Mathematica code, and of the very few who have actually used Mathematica (maybe in their school days in the computer lab), most don't have a basic knowledge of programing it. (Mathematica 6 sells for over 2 thousand dollars i think (check their web site to be sure), and before that version it was over one thousand dollars, and i guess it has sold perhaps 10 or hundreds of times more than the lisps sold by all lisp companies combined)

This doesn't mean that all newsgroup posters are morons, or that all messages are of low quality, or that they are not valuable. (many, perhaps as much as half, of the posts in comp.lang.* groups are highly technical, and highly valuable to the questioner or participants) But what i'm trying to say is that of the messages in debate- or opinion-oriented discussions (which comprise perhaps 50% of posts), a significant portion miss the mark completely. (as in this thread) Often this results in digressions and debates over technical details and facts.

In this thread, my question is about the necessity of lisp's object concept. Most simply don't understand this question. This question certainly isn't a typical computer-language-design question. (the bag of problems the typical language designer has, esp compiler experts, is exemplified by a recent post of Ray Dillinger (comp.lang.lisp, 2008-01-09, “Re: Java as a first language "considered harmful"” (http://groups.google.com/group/comp.lang.lisp/msg/365cb58ce1b65914)))

What do i mean by “necessity of lisp's object concept”? I thought i gave sufficient explanation in my post, but so far nobody has actually addressed it squarely. The closest is Rainer Joswig, who gave a valuable, technical examination of lisp's objects and why they are necessary. (i still need to digest its technical details) Though in my opinion it does not address the problem head on, because it is addressed from a lisp expert's point of view, with not much consideration of computer languages in general, and by someone who is also an ardent lisp advocate.

To help understand the problem and related issues, perhaps we can ask a series of concrete questions that are either a paraphrase of the original question or very much related to it.

• Do lisp variants or descendants, such as Logo, Dylan (and perhaps Tcl), all intimately require their programers to have a notion of objects? And how close or remote is their notion of object from lisp's? Do they actually use the term “object”? (note here that the word “object” is not to be confused with the term “object” in a lang's Object-Oriented Programing facility. The “object” in our context is a certain “thing” or “entity” type of concept whose understanding is required for the programer to be proficient in the language. (almost all langs have certain “internal” or “behind-the-scenes” concepts that are exposed to the programer, particularly “compiled languages”, but as far as i know they do not all necessarily require the programer to understand a notion of an “entity” that represents all things/objects/items/living-units of the language. (some will just speak of data types)))

• Consider one of emacs lisp, common lisp, or scheme lisp. Suppose we make purely documentation changes, such that all mention of objects is deleted. In their place, if necessary, the manual talks about expressions instead, or uses wordings such that all discussion of lisp objects is replaced by speaking of a particular instance of its print syntax. (in other words, do it like Mathematica does) What is the practical effect of this? Is it reasonably doable?

• Do languages like Haskell, Clean, Erlang, or F Sharp have a concept of a language's running unit a la lisp's objects?


A few notes about the idea of a “behind the scenes model” of a language.

First of all, all computer languages as we know them necessarily have a textual representation. (for simplicity of discussion, we exclude here GUI-based languages (e.g. Prograph), databases, spreadsheets, cellular automata, etc.) Now, since all these languages run on silicon-chip-based hardware (we exclude exotic or theoretical hardware such as quantum, DNA, optical, etc), inevitably there is a concept of a “process” when the program runs, and this process may be further broken down so as to be constructed out of abstract entities that correspond to the textual elements of the language the programer types. So it is very silly to answer to the effect that “yes, lisp's objects are necessary because there is non-textual hardware it runs on”.

Another point, perhaps obvious, is that many languages, especially high-level ones such as Mathematica, PHP, and Javascript, do not have such concepts of entities (a la lisp's objects). Also note, as stated in my first post, that many other languages do have a certain “behind the scenes” language model that the programer needs to understand before becoming proficient with the language. For example, Java has its own “objects” and “references”. Perl has “references” and “file handles”. C has pointers ...

I think another important thing we need to realize is what constitutes “necessity” in the phrase “the necessity of lisp's object concept”. For example, in almost all languages, a programer can have a working knowledge, perhaps being employed as a junior programer, without understanding the “behind the scenes” concepts of the language. In fact, i assert that one can program in lisp and write large-scale or significant amounts of software without actually having a good understanding that lisp's expressions are compiled into entities called lisp objects. (certainly i programed rather a lot of useful functions in emacs lisp before getting a clear idea of the terminology “lisp object” used throughout its manual) For another example, in Java, i venture to say that the majority of professional java programers (“majority” meaning greater than 50%; “professional” meaning those who make a living primarily by coding) do NOT understand any of the behind-the-scenes java language model. (specifically, what i mean by java's “behind-the-scenes language model” is what you acquire when you read the Java Language Specification, though many of its terminologies and concepts are also used throughout the Java manual (aka the Java API) without much explanation)

Given a computer language, to what extent does it have “behind the scenes language model” concepts, and to what degree is understanding these concepts necessary to be practically proficient in the language? I think these are all relevant questions.

(I think this is an extremely important realization: modern languages are progressively getting higher level, and they have less and less of any “behind the scenes language model” concepts. Or, put in a somewhat equivalent way: a programer can be highly proficient without needing to understand the language's technical specification. This “behind the scenes language model” concept is intimately tied to the compiler/speed concepts that are exposed to the programer in the language. (this i've detailed somewhat in my essay “Jargons And High Level Languages”))


I do not really know what is the so-called “denotational semantics” of computer “scientists”. However, for the past decade, i have always inferred that the term denotational semantics describes Mathematica. Basically, the gist of the idea to me is that there's no “behind the scenes language model”, of the kind that exists for just about every other language, each idiosyncratic and totally mutually incompatible. The word “denotational” means to me that the language works by notations, denotes its meaning by notations, which fits my concept of what Mathematica is. In other words, a language that has denotational semantics just means to me that it is like a live, computable system of mathematics as a _formal system_. (for those of you who don't know, a “formal system”, in the context of mathematics and in our context of programing languages, is a language whose computation model is based on string transformation or a similar notational model, and all meanings of the language are completely represented by just the symbol sequences (no “behind the scenes language model” stuff). (another way to easily understand this, for people who have never touched term-rewriting based systems, is to think of programing in a language that understands strings and transformations of strings but nothing else. (no numbers, for example)))
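To make the term-rewriting idea concrete, here is a toy sketch in Python (an analogy i'm adding for illustration only; the rewrite rule is made up, and a real term-rewriting evaluator such as Mathematica's is far more elaborate):

```python
# Toy "formal system": computation as repeated string transformation.
# The single rewrite rule "ab" -> "b" is applied until no rule matches.
# There is nothing behind the scenes: the entire state IS the string.
rules = [("ab", "b")]

def rewrite(s):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)  # apply one rule, one time
                changed = True
    return s

print(rewrite("aaab"))  # -> "b": each "ab" collapses in turn
```

The point of the sketch is that the “meaning” of the computation is entirely in the visible symbol sequence and the rules; there is no hidden store of objects behind the text.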

(i'd have answered many of my questions if i knew Haskell well... but never did)


2008-01-23

Ken Tilton wrote:

If I say (eq (list 1)(list 1)) in Lisp I get nil, aka, nope, they are not EQ, EQ being the test for object identity. (They happen also not to be EQL, just EQUAL (the one that digs into structure)) What happens in Mathematica?

In[5]:=
  Equal[List[1],List[1]]

Out[5]=
  True

This is the beauty of Mathematica. Simple, easy to understand. What you see is what you get. No behind the scenes stuff.
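For readers more at home in Python, the EQ vs EQUAL distinction Ken raises looks like this (a rough analogy i'm adding; Python, like lisp, has the behind-the-scenes object model):

```python
# Structural equality vs object identity: the Python analogue of
# Lisp's EQUAL vs EQ. Mathematica's Equal compares structure only.
a = [1]
b = [1]

print(a == b)  # True:  same structure (like EQUAL, or Mathematica's Equal)
print(a is b)  # False: two distinct behind-the-scenes objects (like EQ)
```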

The advantage to me in Lisp is that I can have two lists starting with the same textual expression and they will, well, be two different lists, not one.

This is exactly the confusing, behind-the-scenes lang model i mean. Depending on the language, it can get quite convoluted. (in particular Java, with its access specifiers, constructors, objects, arrays, references etc.)

I can mutate one without mutating the other, holding references to each in different places, and then see different lists when those two places are compared, or (the flip side) see “the same” list even when the contents of the list have changed. ie, I might have two Lisp “places” bound to the same list and want changes to be visible regardless of the binding.

Note here that Mathematica programing doesn't have any concept of memory, pointers, or references in any way. So, this means, you can't change one entity and expect some other entity to also be changed mysteriously or magically.

Remember, everything is textual, “what u see is what u get”; think of it as a formal mathematics system. (formal means FORM-al. (think of lisp's concept of “forms”, critical in its macros))

Kenny wrote:

Mind you, this all comes down to the question of state. Does Mathematica have that concept, or is it purely functional? (not that I am conversant with FPL either). If not, sure all list[1,2,3]s can be collapsed into one, no need for object identity, there it is, list[1,2,3]. But in Lisp we can have two variables bound to the same list:

(setf x (setf y (list 1 2 3)))

...and then (delete 2 y) and see the result in x because x is “/the same/ list as y”, it is /not/ “the list of 1, 2, and 3”.

Is that necessary? It is what it is, which is different than Mathematica it seems, but I do not see an interesting issue here.

Excellent example to illustrate the issue. I'd pick on your characterization of this as stateful vs not-stateful. The behind-the-scenes hard-linked kinda thing is basically modeled on state-machine kinda thinking. (actually, more just because it's easy to implement)

Mathematica does not use a “behind the scenes” lang model, but you can assign vars in Mathematica and change them later in your program. The Mathematica programing culture is more like “whatever works”; in this sense a lot like Common Lisp as contrasted to Scheme lisp.

In Mathematica, if you want the behavior of hard-linking 2 behind-the-scenes things, just assign them to the same var... like this:

In[25]:= 
  data={1,2,3}; 
  x := data; 
  y := data; 
  
In[28]:= 
  data=Delete[data,2] 
  
Out[28]= 
  {1,3} 
  
In[29]:= 
  x 
  
Out[29]= 
  {1,3} 
  
In[30]:= 
  y 
  
Out[30]= 
  {1,3} 
  

The “:=” is a syntax shortcut for “SetDelayed[]”, while “=” is “Set[]”. The difference is that Set evaluates the rhs before assignment, while SetDelayed evaluates the rhs anew each time the lhs is used.
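A rough Python analogue of the Set vs SetDelayed difference, emulating the delayed form with a zero-argument function (an illustration i'm adding; this is not how Mathematica is implemented):

```python
# "=" (Set): rhs evaluated once, at assignment time.
data = [1, 2, 3]
x = list(data)       # snapshot of data's current value

# ":=" (SetDelayed), emulated: rhs re-evaluated at each use.
y = lambda: data

data = [1, 3]
print(x)    # [1, 2, 3] -- the eager snapshot is unchanged
print(y())  # [1, 3]    -- the delayed form tracks data
```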

In my 2 decades of programing experience, with 5+ years of geometry programing and 5+ years in web application development and unix sys admin, being expert at several languages, i don't think there was an instance where i needed 2 things to be linked in some hard way. And, whenever a lang supports such an idea, it usually just hampers the use of the language. (one example is the unix file system's hardlinks)

Basically, i think the concept of linking things is one of those easy-to-implement paradigms that got rooted in the early days of computing, when hardware (as well as software tech) was rather primitive. Over time, these types of paradigms get shifted out and eliminated. (e.g. dynamic vs lexical scope, pointers vs so-called garbage collection, primitive static typing (a la C, Java) vs dynamic typing (lisp, perl, python, php, ruby, javascript ... on and on), or typing with inference (haskell), or no types / mathematical types (Mathematica))

People unfamiliar with ultra high level languages such as Mathematica (or perhaps haskell too) can understand this “what u see is what u get” by comparing dynamic and lexical scope. Lexical scope, in a sense, is of the model “wysiwyg”, while dynamic scope can be likened to “behind the scenes happenings”. The behavior of a Mathematica expression is wysiwyg, while lisp's expressions, with their “objects”, are “behind the scenes happenings”. Also, the “behind the scenes” style is often the more brainless of the two to implement; that's why it is everywhere. But as i have repeated quite a few times, in computer langs throughout the past 2 decades we see progress towards higher level, with less and less of these behind-the-scenes lang-model kinda langs.

Kenny wrote:

No beauty either way, just different.

There is much beauty far beyond lisp, Kenny.

I get frustrated seeing how people truly are moronic in comparison to me, having no idea what i'm talking about while i fully understand what they are talking about. In the past few years i began to reckon that i'm just a genius, whose views are maybe 30 years into the future beyond the average computer scientists or industrial programers. (a bit scary to say this myself, i know.)

In this particular thread, i claim that high level langs of the future are getting rid of the “behind the scenes” lang model, as i have repeatedly voiced in this thread. When i posted my first post, i had some reservations, thinking that lisp's objects concept is perhaps necessary in some way for the lang's areas of application. Now, having seen the many replies and having thought more about it, i'm more inclined to think it's simply another baggage of the 1970s computing era, and probably can be eliminated without affecting the power of the lang in any way.


2008-01-24

Kenny wrote:

I do find things like C++ pretty scary in this respect, mostly because of that whacky syntax. :) I am just not sure mutable state is necessarily confusing. I suppose (but am prepared for rectification) you are OK with slots of structs and classes (does Mathematica have such things?)

struct as in C's “record” data type? Classes as in Java? Mathematica doesn't have these things. For Class, it depends on what you mean. As a language's data type, no. As a function that contains data and inner functions, which one can implement in just about any well designed lang, yes.

Kenny wrote:

being mutable, such that a change to a slot of an instance is visible from any reference that might be held to that instance. And I surmise also you do not consider this magic nor even behind-the-scenes? If so (and again I understand you might not in fact agree so far), then all we have in Lisp is that our lists (and other things other than numbers and characters) also have object identity, so if there are multiple references to the same thing they all change together, like an in-memory database if you will.

As i have repeated many times now, programing in Mathematica does not have any concept of memory, reference, or pointer, WHATSOEVER. There is NO behind-the-scenes memory, object, or abstract storage location that some other entities “reference”.

As i have repeated many times, one can think of Mathematica as graph-rewriting, term-rewriting, symbol-sequence transformation, or a formal system. Where the formal here means mathematical _formalism_ (look it up in wikipedia, folks!). Or, think of it as lisp's concept of “forms”, used everywhere in the emacs lisp documentation; it is essentially the principal concept one needs to understand lisp's macros. So, practically speaking, one could think of the Mathematica language as if it were programing in lisp using macros and nothing but macros. (So, if you think of it this way, you'll see what i mean by no “behind the scenes”, a wysiwyg model of language. I repeat: think of it as pure lisp macro and form programing. _Textual_. No “behind the scenes” stuff. WHAT YOU SEE IS WHAT YOU GET.)

Of course, despite my profuse talking, i guess it is nigh impossible to convey the idea unless one actually starts to program in Mathematica, or spends a few hours with it. By analogy, imagine yourself trying to preach the gospel of lisp to Java programers. You can spend hours and a few thousand words on some concept that's crystal clear and basic to you, but the Java guy wouldn't be able to comprehend it. (and if the Java person happens to be a hotshot or have years of industrial experience, or if you happen to not kiss their asses (why should you?), he'll deem you a fucking troll)

The thought has come to my mind to write a few pages of tutorial of Mathematica for lisp programers. Written in a way such that every Mathematica construct has an exact analogous counterpart in lisp, in both syntax and semantics, so that the tutorial also functions as a fun-oriented lisp programing excursion. (in fact, if you are a lisp expert, you can consider yourself already half a Mathematica expert. (lisp's major concepts of fully functional nested syntax, forms, macros, lambda, lists, and function application are just about the same set of fundamental ideas as in Mathematica, almost token for token identical, except for a few syntax quirks, or when we get into the hairy stuff of the evaluation model, exhibited in lisp as “quote”, “eval” ... or in Mathematica as Hold[], HoldForm[], Evaluate[], ReleaseHold[]...) For any seasoned lisper, if you actually just spend 1 month studying Mathematica, you can boast 1 year of Mathematica experience in your resume. And if you code Mathematica for 1 year, you can boast being a Mathematica expert. (in this regard of Lisp/Mathematica, it's the same with Perl/PHP, Scheme/Common Lisp, and probably similar with Java/C#, due to their intimacies))

In many of my posts about lisp here in 2007, i have often hoped that my discussion may lead to a discourse on the hows and technicalities of writing a lisp extension that emulates Mathematica. (i am ambivalent about leading in this direction, since if successful it will piss off Stephen Wolfram (robbing his business), possibly entail legal issues, and i myself am not ready to gratuitously offer my services to the lisp community, given the way lisper MORONS reacted to my criticisms, and amidst the fucking militant OpenSource collectivism ambiance)

Lisp can actually do it relatively easily: emulating the full set of Mathematica's systematic list/tree extraction/manipulation functions (getting rid of the cons problem for good), as well as getting rid of lisp's “behind the scenes” object concept with some practical benefits; a full automatic code formatter (beyond emacs's current offerings) that gets rid of the endless manual “how you should indent parens” once and for all (as in the benefit transparently offered by Python); and also layering on top of sexp an infix notation while maintaining absolutely the full sexp benefits. And even more, if lisp experts are interested in spending quite a lot of energy now: going as far as making lisp a full 2D mathematical notation display system, and a notebook system (think of it as a language feature so powerful that the SOURCE CODE actually transparently functions like a Word Processor file format that can contain structures and images).

But before any of the above discussions can sprout, it is already shot down by the army of sensitive and clueless lisper morons. Guarding their ways in high anxiety, dismissing the FREQUENT and PERSISTENT criticisms of lisp (many of these criticisms are from renowned lisp luminaries), and somehow thinking that a language designed 40 years ago is destined to be perpetually perfect. (meanwhile grudgingly witnessing the fact that relatively fucking stupid langs like Perl, Python, PHP, Java, Ruby were born and rose beyond lisp's combined applications in its entire history) Many old lispers still fancy that lisp is the only unique, all-powerful, all-living creature that rules all languages.

This is a social and psychological problem of lisp, mainly caused by the old timers. Besides fixing problems of lisp the language proper, other problems and possible solutions, such as the frequent complaints about its libraries, are similarly thwarted and smitten.

Whenever there's a new lisp, such as dylan or the vaporware arc, there's a high sentiment of hate-love brouhaha. All Common lispers are anxious and giddy, throwing remarks of excitement and belittlement intertwined. The Scheme lispers live in a fantasy world of their own, simultaneously wishing for popularity and aloofness, culminating in the R6RS fuckup in late 2007. (if my chances of becoming a Common Lisper anytime soon are 1%, my chances of becoming a Scheme lisper are 0.1%, by sheer will.)


Kenny wrote:

Now if I am wrong and you do find mutable state in general confusing, well, I guess many agree with you, namely the entire FP community. And even I agree: the functional paradigm is a huge win for transparency and program correctness. I just never enslave my self to good principles like FP, I always reserve the right to do what I want, shucks, even use tagbody-go in moments of weakness. :)

KENNY!! IT'S NOT A QUESTION OF MUTABILITY! Mutability — another computer “scientist-moron” jargon. What do you mean by mutability? As i have repeatedly explained in different contexts in posts here, opaque jargons beget confusion. By mutability, do you mean ultimately the ability to reference the same memory address, exhibited in lisp as lisp objects? By mutability do you mean Haskell's prevention of a variable being assigned twice?

As i tried to explain in the previous message... regarding Mathematica: if by mutability you mean pointers/references/behind-the-scenes objects, then no, Mathematica doesn't have any such notions. If by mutability you mean whether a variable can have its value changed, then yes, Mathematica does.

The “mutable/mutability/immutable” is another fucking jargon that refuses to die, precisely because it's a fucking moronic, imprecise, and gaudy jargon. It has different meanings as applied to data types, languages, language models, computational models. Every tech geeker latches his own conception onto it and brandishes it to no end.

Note here, as i have also repeated in different posts in the past: Mathematica's manual is over 1 thousand pages, containing fucking over 1 thousand built-in functions, with maybe a few hundred advanced mathematical functions that only a handful of mathematicians in the world understand, and containing all the fucking fancy constructs exhibited by jargons such as lambda, HOF (higher order function), fucking first class citizens, fucking fuck you “closures”, mutabilatilabilalilaty, M'm'mmmmacros. Yet those thousand-plus pages don't mention any of these fucking shit terms, and are extremely easy to read and understand. (i happen to have read the over 1 thousand pages of Mathematica manual word for word, from cover to cover, 3 times during the 1990s, in different years)

Kenny wrote:

Well, the c.l.l savages always ridicule me on this, but one /does/ have to master the cons to do well with Lisp. And I guess you are right, many a noob (all?) trips over Lisp lists early on. Hell, I have always maintained Lisp lists should be explained this way: there is no such thing as a list, there are just conses with cars and cdrs.

Exactly. When will these lispers die?

When?


In[25]:= 
   data={1,2,3}; 
   x := data; 
   y := data; 

In[28]:= 
   data=Delete[data,2] 
Out[28]= 
   {1,3} 

In[29]:= 
   x 
Out[29]= 
   {1,3} 

In[30]:= 
   y 
Out[30]= 
   {1,3} 

Kenny wrote:

Can I go the other way, Delete[x,2] and see it back in data?

No, not in the above. Remember, in Mathematica, everything is textual, or copies. All functions return copies. There's no such thing as some “object” that lives in some ether space, such that changing it affects other entities “pointing/referencing” to it. There is no concept of reference/pointer/memory whatsoever. Again, think of programing using only lisp's macros. It's all about form changing. What you see is what you get. WYSIWYG.

Kenny wrote:

Or has x become like a symbol-macro for data?

Yes. If i understand you correctly, yes.

Xah wrote: 「The “:=” is a syntax shortcut for “SetDelayed[]”, while “=” is “Set[]”. The difference is that Set evaluates the rhs before assignment, while SetDelayed evaluates the rhs anew each time the lhs is used.」

Kenny wrote:

And once it is called? What if you had evaluated x /first/ and seen 1,2,3 and /then/ deleted 2. Would a second evaluation of x still return 1,2,3? If it returns 1,2, then := in a sense gives one the same thing as object identity, except (and this may be an advantage, if I want to achieve this behavior I must remember to use :=. ie, if I now want to have z refer to the same list, I must say z := x.)

Yes.

Again, if a programer actually wants the behavior of certain things linked together, you can do it, just as one can in any well-designed lang. (For example, in lisp, you can program the hard-link behavior without resorting to lisp's default objects concept) The concept of linked behavior, as exhibited by langs that come with the “behind-the-scenes” “language model”, is essentially having the variables defined in terms of another variable, since the hard-link concept is basically just references pointing to the same memory address.

I'd be happy to show how it can be done in Mathematica if you provide lisp code of certain linked behavior you want.
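The same point can be sketched in Python (hypothetical names, an analogy i'm adding): instead of relying on implicit reference semantics, define both variables explicitly in terms of one shared place.

```python
# Linked behavior built explicitly: x and y are defined in terms of
# the one shared variable `state`, so they stay "linked" by
# construction, with no hidden reference model needed.
state = {"data": [1, 2, 3]}

x = lambda: state["data"]
y = lambda: state["data"]

state["data"] = [1, 3]   # change the one shared place
print(x())  # [1, 3]
print(y())  # [1, 3]
```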


The thing to understand here is high-level-ness. It seems to me, programers accustomed to compiled languages, as well as lispers, have an extremely difficult time understanding the concept of a computer language as a textual interface to computation. In particular, compiler writers, or those acquainted with compiler knowledge, are spectacularly incapable of understanding this, because their brain has been permanently damaged by low-level exposure. (in this thread, that Kaz Kylheku guy is the best example, voicing his dips-and-jumpers mumbo jumbo into this thread. (if we switch the underlying hardware to DNA or quantum qubits, his dips and jumpers go haywire))

Kenny wrote:

But if we cannot mutate lists we lose expressive power. Lisp programs that always copy structure cannot scale beyond toy size. So mutable state is not there just to confuse people, it extends the language (so, yes, there is more to be learned).

I object to your use of the term mutable here, as discussed above. However, if i understand you correctly... here's the issue.

My issue here is the complete separation of the language as an interface to computation, from elements in the language that are hints/declarations/helpers for the current technology of compilers, to achieve speed or to address certain low level issues (such as reference to memory, or number size in bits), which are necessary when the language is used in certain applications (such as writing an OS, compiler, or device driver) that by definition need direct access to hardware elements.

As i've tried to explain above, the behavior of linked things (which you are calling “mutability” here) is essentially having 2 vars pointing to another var that holds the value. As i tried to explain, this can be easily done in any well-designed lang, without the language having a behind-the-scenes reference/pointer/object model. (how exactly it is done depends on the language, of course) As stated before, if you give a code example in lisp that shows some linked behavior, i can provide the same behavior coded in Mathematica, or i can even try to code the same behavior in emacs lisp but implemented without utilizing what you'd call the “mutability” concept (what i'd call the behind-the-scenes language model, or pointers/references/“memory addr”/“lisp objects”).

In yet another way to see this: focus on the _behavior_ of computation as a function of the language's textual input (the source code). Put yet another way: any of the idiosyncratic behind-the-scenes language models of Lisp/Java/C have no mathematically meaningful utility. They are by-products of computer engineering constrained by today's hardware, often necessary due to practical speed issues. (and in the case of C and Java, or most imperative langs, it's mostly because they're brainless to implement)

Kenny wrote:

OK, but if one thinks of dynamic scope as up the stack instead of behind the scenes, it is not so scary. Mind you, this means using special variables with discipline, only ever binding them in let clauses.

Nah.

First of all, please don't use the word stack. I don't know what the fuck is a stack, and i know it is mathematically meaningless. What i do know, is that one can insert ping pong balls into a pussy, and the last one inserted will come out first. Now, that is mathematics.

Dynamic scope is another stupidity, just like the behind-the-scenes things i've spoken of above, in that it has no meaningful mathematical value. Its existence is just a by-product of computer engineering constrained by computer hardware (in this case, the hardware of a few decades ago). As a “proof” of this, you do realize that dynamic scope has basically disappeared in the langs of the past 2 decades.

Of course, as a review of computing history, we note that the behavior of the fortuitous “dynamic scope” of the past is actually sometimes desirable. And, it can be easily implemented in any well designed lang, without the lang itself having the behind-the-scenes stuff. Exactly how this can be implemented on top depends on the lang, of course.
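As a sketch of that claim, dynamic binding can be emulated on top of a lexically scoped language with an explicit stack of binding frames (a hypothetical illustration in Python; the names `dynamic_let` and `lookup` are made up for this sketch):

```python
import contextlib

# A stack of binding frames; the innermost frame that defines a name wins.
_frames = []

@contextlib.contextmanager
def dynamic_let(**bindings):
    """Push bindings for the duration of a with-block, like a dynamic LET."""
    _frames.append(bindings)
    try:
        yield
    finally:
        _frames.pop()

def lookup(name):
    """Find the innermost dynamic binding for name."""
    for frame in reversed(_frames):
        if name in frame:
            return frame[name]
    raise NameError(name)

def report():
    # Sees whatever binding is in effect at CALL time, not definition time.
    return lookup("verbosity")

with dynamic_let(verbosity="high"):
    print(report())          # high
    with dynamic_let(verbosity="low"):
        print(report())      # low
    print(report())          # high -- the inner binding has been popped
```

The dynamic behavior is recovered as an ordinary library feature, without the language itself carrying a dynamic-scope model.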

Again, let me emphasize: the issue here is a clear separation of what is mathematically meaningful in a language that specifies computation, from what is a practical, speed, or hardware issue. If one wants to design a language that's not just high-level like Mathematica, but also as speedy as C, then, as i have written in a long essay before, this should be done by having special declarative/hint constructs that clearly indicate to the user of the language that such is for the compiler, and has nothing to do with his computation.

Kenny wrote:

I wish I knew more about FP, it sounds like you would prefer that to the extra mental housekeeping we Lispers take for granted (and sometimes screw up).

I think i'm even averse to the use of the jargon Functional Programming. It smacks of a certain elitism and generates confusion. So-called functional programing is practically just defining your subroutines so they don't call or set global vars. Of course, in a more strict sense, it entails all that lambda, currying, recursion, higher-order-functions, function sequencing aka filters or streaming, function composition, “referential transparency”, monads (whatever that is) ... and all that good shit.

... now a bit rambling about jargons:

Jargons, first of all, we should understand, are terms that arise in a particular field among its professionals, for the practical purpose of communication. More specifically, the need is usually for a shorthand for communicating complex ideas, and a way to make the ideas precise. As i have said profusely in my various essays, jargon, being a part of language, which is a social element of human animals, also serves a social function of class differentiation (think of how the way you dress represents who you are; and how your diction and accent give away who you are; and how human animals in general go out of their way to dress/speak in hopes of giving impressions, hiding, or emphatically being themselves). Jargons, in this social role, tend to become abused, in the sense that often jargons are thrown about by people without actually intending to use them for their original, professional purpose of ease in communicating specialized ideas. More specifically, jargons are thrown about by people who neither understand what they mean nor intend to discuss any issue. (see the many literary, highbrow words i used in this post? They are not really there for the purpose of clear communication. They are there to tell my detractors that they are fucking morons in comparison to me in the department of english writing. Too.)

Now, a jargon, or the naming of things, has a quality aspect. That is, as professionals in some field, we can judge the quality of a jargon according to the degree the term communicates its meaning, or by other criteria such as terseness, familiarity, religious/cultural issues, and so on. For example, there are standardized jargons in almost all professional fields with published tomes, and many times how a new concept or thing is named is much debated for its merits before it is actually standardized or adopted by some standardizing organization in the field. (think of jargons in law, physics, mathematics, chemistry, astronomy, biology, psychology, literature, linguistics ... and in general any science. (try to come up with examples of jargons in each of these fields yourself))

Now, let's consider the jargon “functional programing” a bit. The jargon “functional programing”, in my opinion, is a jargon of good quality. It communicates its meaning quite well, based on its similarity to the math concept of a function. Not too long, culturally ok, not offensive..., using words that are commonly understood (sometimes this is a good thing, sometimes not)...

Now, let's consider the jargon “functional programing” in the social, class-differentiation context. When the jargon functional programing is abused, in what way is it abused? What are some of the particular effects or ramifications of its abuse? I don't have some elaborate analysis of this... but one observation i can make is that it confounds the very essence of the idea of functional programing, namely that of coding subroutines such that they don't use global vars. So, especially in tech geeker groups as in comp.lang.*, it often gives laymen (imperative programers) the impression that it is something very complex, involving a bunch of “advanced” and esoteric concepts of math. Essentially, distorting its essential meaning, and generating non-understanding of something that is otherwise trivial.

(again, i note here that the Mathematica language's one-thousand pages of manual do not emphasize or give the reader any impression of the jargon “functional programing” at all. (am not sure if the term is actually found in the manual other than in an intro section showing briefly how Mathematica can do various styles of programing) This is an extreme beauty and savviness of Stephen Wolfram. The effect of this proper technical writing avoids the problem of programers sensationally discussing things that are divisive of programing (i.e. writing specifications for a desired computation). In contrast, the worst shit of jargon abuse and its damaging effects happens in the Haskell, Scheme, Perl, and Python documentations. The Perl and Python docs are in a different class of stupidity than the Haskell and Scheme ones. The Haskell and Scheme people actually understand the meanings of the jargons they use, and they use them properly, only not understanding their social aspects. While the Perl and Python morons do not understand the technical aspects of the jargons they use, and are also so fucking stupid as to litter them profusely in their docs)

Kenny wrote:

No, mutable lists rock and are one of the great bits of genius behind Lisp, the idea that the singly-linked list is an incredibly versatile data structure. To have that and all the list-processing functions just sitting there at ones fingertips while dashing off little algorithms is simply huge, explaining Greenspun's Tenth (the replication of Lisp in other HLLs).

O my god Kenny. By now i hope you understand my idea of the separation of computer language issues into (1) language elements that have to do with constructing computation and algorithms, and (2) language elements that are by-products of computer engineering constrained by hardware technologies; mathematically valueless, but necessary anyway for practical, speed, or hardware-device-controlling aspects.

If you understand the above, you'll see that the “linked list” belongs to the second category. In other words, it is a mathematically meaningless concept as far as computation is concerned.

And, if you understand the above, then the linked list, as exhibited in lisp's cons cells, is detrimental and hampers the programer's job of using the language to specify computations. To you, a lisper, you might not see this, and think the cons great, because the lang's technical details are ingrained in your mind. But let me put a mirror to you: suppose i present to you the idiosyncratic, behind-the-scenes, language-model thingamajigs used in the Java or C or C++ languages; you probably would be very confounded by them, and find them quite a fetter and distraction from specifying the computations you have in your mind.


In the above, i have expressed the idea of the separation of the mathematically meaningful elements of a computer language from the elements in the language that are necessitated by hardware and speed constraints.

Stephen Wolfram — a certified genius — resorts not to words but deeds, and created Mathematica some 20 years ago based on this principle. (i do not know if he explicitly thought in this way, nor whether this was the dominant principle he had in mind when creating Mathematica. However, it is my ascription that Mathematica the lang is a lang based on the above principle. Further, Stephen is fully aware of this principle, and this is exhibited in a lot of places in his writings. (side note: there are other langs based on this principle; also, i'm not the only person who has this idea. This idea of a lang being a pure tool for the specification of computation, without computer-engineering-hardware issues, is certainly not new. However, it is probably the case that 99% of computer scientists who work on languages have no idea of it.))

I hope i have made this idea clear. A recent essay on this is at: Jargons And High Level Languages

(and read the related essays linked at the bottom if you want to know more)

Is there any lispers who are still very confused and don't know what i'm talking about? Raise your hands please, so i can kick you in the ass.

Kenny wrote:

If it helps, what I notice is the character-orientation of your summary. eg, What happens if my last sentence instead was, “That summary is character-oriented?” You and I just disappeared, and the chance of interpersonal conflict diminishes. On those rare occasions when I am not actively making trouble here, I sometimes go back and rewrite so all the pronouns disappear. Fascinating exercise.

...

You worried above about being a genius, here's something you might like. I got the idea from a bumper sticker, so you know it's good: “If you think you can do anything, try sailing.” Made relevant, nothing is harder or more rewarding for folks like us than getting along with others. So if you think you are a genius, Xah, figure out how to get along with folks on Usenet. I'll give you a headstart from your roots: “Win without fighting.”

Thank you for the kind gesture.

Though, the question here, is whether i'm willing.

Further, the underlying assumption here is one of hostility versus benevolence. In general, i'm probably one of the most loving persons here. I quote:

«The best index to a person's character is (a) how he treats people who can't do him any good, and (b) how he treats people who can't fight back.» —Abigail Van Buren

(Compare, the loving, caring, moralists and mother-fuck-faces such as George W Bush, who caused the death of 40 thousand to 260 thousand people)

Also, consider on a smaller scale those “benign” and “OpenSource-morally-good” tech geekers who hold power. Namely, the ops/admins/“first-click”-creators in IRCs, blogs, mailing lists, online forums, newsgroups... many have banned/kicked and harassed me (one of them in a legal way), and some, i'd guess, wish to do me harm because they think i'm a “troll”.

(perhaps i should note here, publicly for the first time, that i was banished from freenode's irc emacs channel in 2006 (and am still banned to this day), by an FSF-abiding (and FSF-employed), morality-sensitive, vegetarian fuckface, one John Sullivan (aka johnsu01), despite the fact that i was perhaps one of the most helpful members of that channel at the time (and i offered verifiable proof of this assertion). For detail, see the link at: Emacs Related Essays)

Kenny wrote:

Ironically, that is hard for you because you are /too/ social an animal. The sociopath feels no connection to others and so can charm them into doing anything. Your thin skin (a great metaphor, btw) leaves you no way to keep people out, no way to shrug off those whose e-company you do not enjoy. In a sense, by reacting strongly to others you make their problems your problem, in that you cannot be comfortable as long as they are jerks. Not a recipe for contentment. There are billions of people on this planet, put your energy into those you enjoy, not so much into those you do not. When your will weakens, well, hey, what's a killfile for? :)

Yeah... i appreciate your comments. I like writing. I used to despise writing when i was 20ish (~1990), thinking it a second-rate activity that should be allotted to writers as opposed to mathematicians. (apparently, this is not an uncommon thought among scientists, Einstein among them) But since about the mid 1990s, i find that i enjoy writing. I guess there are several reasons i can trace back. One is my habit of reading dictionaries. I have, perhaps, checked English dictionary entries more times than all the persons who ever visited comp.lang.lisp in their lifetimes, combined, since the existence of comp.lang.lisp. (unless one of them happens to be a lexicographer) (or, alternatively: i have looked up dictionary entries more than 99% of persons who have a PhD in literature or English) Secondly, my study of logic made me practice extensive writing in the most austere, logical style possible, during the mid 1990s. This happenstance also trained me greatly in critical thinking and made me aware of many deep problems and levels of logic and philosophy, esp in communication and English expressions. (and i have since come to fucking despise grammarians and English writing pedants (because i find their ideas and teachings moronic))

Another comment on your above paragraph concerns the concept of living by hatred. You have seen Star Wars, right? In Star Wars, there are the Sith, who live and thrive by hatred. Y'know, there's a saying among them, that hatred leads to power. I just found the following verse on the web:

   The Sith Philosophy 

   Fear leads to anger. 
   Anger leads to hate. 
   Hatred leads to power. 
   Power leads to victory. 
   Let your anger flow through you. 
   Your hate will make you strong. 
   True power is only achieved through testing the limits of one's anger, passing through unscathed. 
   Rage channeled through anger is unstoppable. 
   The dark side of the Force offers unimaginable power. 
   The dark side is stronger than the light. 
   The weak deserve their fate.

Speaking of hatred, one has to wonder, whence does hatred originate? According to the above, fear leads to anger, and anger leads to hate, but that's just a theatrical composition. Fear is inherent in every human animal, in particular the fear of other human animals, but fear doesn't necessarily lead to anger, and fear isn't the dominant emotion among a human animal's emotions. So, at this point, the verse's logic of the origin of hatred breaks down. After all, it is theatrical. Now, back in reality, hatred nevertheless usually has a cause. You don't hate unless someone, or something, made you.

The US American culture in general believes in some pure form of evil (such as Saddam Hussein). This can be seen in the comic book stories of superheroes they are brought up with, where the story line can almost always be characterized as clearly divided forces of good vs evil, and the bad guys usually take the form of some pure dark evil lord. It is also cultivated by the concept of the devil in their God-Believing-Sect's scriptures. (contrast this with the Greek mythologies, where gods and mortals, good deeds and bad deeds, are all complex and human, and it is nearly impossible to say who's the “bad guy”.)

Although i live by love and knowledge, I thrive by hatred. Hatred gives me a reason to live. Hatred gives me hope. Hatred empowers me. It is hatred, indignation, defiance, that drove me to a quest for knowledge. It is hatred, of motherfucking lisping _idiots_ with PHDs tattooed to their faces, that drove the production of this essay that explicates a view of high-level languages i have long had. (will be archived on my website soon)

A couple more quotations for the road:

Few people are capable of expressing with equanimity opinions which differ from the prejudices of their social environment. Most people are not even capable of forming such opinions. — Albert Einstein, 1954

It takes considerable knowledge just to realize the extent of your own ignorance. —Thomas Sowell

2008-01-13

Iraq War Deserters

Am reading: AWOL↗, military jargon meaning Absent Without (Official) Leave, more or less a euphemism for “deserter”. Then, Wikipedia has this passage about the number of US deserters in the Iraq War:



«According to the Pentagon, more than 5500 military personnel deserted in 2003–2004, following the Iraq invasion and occupation. [2]. The number had reached about 8000 by the first quarter of 2006. [3] Another report stated that since 2000, about 40,000 troops from all branches of the military have deserted, also according to the Pentagon. More than half of these served in the US Army [4]. Almost all of these soldiers deserted within the USA. There has only been one reported case of a desertion in Iraq. The Army, Navy and Air Force reported 7,978 desertions in 2001, compared with 3,456 in 2005. The Marine Corps showed 1,603 Marines in desertion status in 2001. That had declined by 148 in 2005. [5]»

2008-01-11

modernization of emacs lisp

Perm url:
http://xahlee.org/emacs/modernization_of_elisp.html

Modernization of Emacs Lisp

Xah Lee, 2008-01

Over the past decade, before i really got my hands dirty with Emacs lisp in the last couple of years, i heard many times of wishes and projects that want to replace elisp with Common Lisp or Scheme lisp. At those times, in my ignorance of deep elisp, i wished along the same lines, blithely hoping that one day we'd have an emacs with a better lisp (without me needing to put anything in). (this is the “i want to believe” syndrome typical of Free Software Foundation and OpenSource youngsters) However, in the past 2 years i studied elisp, and chanced to actually spend maybe 30 minutes on this issue of modernizing elisp, and looked at some websites of such projects that use Scheme lisp or Common Lisp. I now think these efforts are not as important as they seem.

Benefits With A New Lisp In Emacs

To create an emacs with Common Lisp or Scheme Lisp as the extension language is not going to work out as one might think. The basis of such a wish is that a programer would learn just one language (and a better one at that), instead of having to learn some X lisp and also Emacs lisp. This line of thinking is reasonable, but has hidden, unexpected flaws. To create such an extension language in a text-processing system, you necessarily need a lot of concepts and data types extraneous to the language itself. Namely, emacs's buffers, point, frame, kill-ring, keymaps, syntax table, fontification data structure, ...etc. So, even if one day we have an emacs with Scheme Lisp as the extension lang, the deeper fact is that it will have perhaps hundreds of functions inherent in the system that have nothing to do with Scheme the lang or general programing. (and there will be the need, and complexity, of separating editing-related functions from the language's core functions) So, the final effect is that, to write any program for text processing or editor extension in this New-Lisp Emacs, the programer will really, factually, spend most of his time learning the workings of these functions in this text-processing system, even if he is already a Scheme expert. In short, given a Scheme Emacs (or Common Lisp emacs), writing code for emacs will not take that much less effort than with the current Elisp Emacs.

There will be a major practical impact, however, if such a system comes into being and is widely used. And that is the benefit of a unification, in the general sense of that word (imagine Scheme and Common Lisp unified; or GNU Emacs and Xemacs; South Korea and North Korea, etc.). A more practical benefit is a pre-packaged, coherent IDE for Common Lisp or Scheme lisp development. (imagine Visual Studio with C++ or Visual Basic; Xcode and Objective-C; Java with Eclipse or NetBeans.)

Technical Advantage

The fact is that a somewhat technically better lisp (CL, Scheme) replacing elisp has very little practical benefit for an emacs extension language, even when we consider just the technical aspects. It is certainly true that Common Lisp and Scheme Lisp are technically better than Emacs Lisp, in different ways. For example, elisp uses dynamic scope. Elisp has a datatype that is both character and integer. Elisp does not support exact arithmetic. These defects are not present in CL or Scheme. And, CL supports multiple threads, whereas elisp is single-threaded. (practically meaning that emacs users must wait for each operation to finish) CL has many tried-and-true libraries. Scheme provides the same language power with a simpler, cleaner language.
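To illustrate the dynamic scope point, here is a minimal sketch (the variable and function names are made up for illustration). In dynamically scoped elisp, a “let” binding of a variable is visible inside any function called within it:

(defvar xx 10)

(defun show-xx ()
  "Return the current (dynamic) value of xx."
  xx)

(defun call-with-local-xx ()
  "Bind xx locally; the binding is visible inside show-xx too."
  (let ((xx 99))
    (show-xx)))

(show-xx)             ; ⇒ 10
(call-with-local-xx)  ; ⇒ 99, because the let binding of xx is dynamic

In a lexically scoped lisp such as Scheme, the analogous call would return 10, since “show-xx” would see only the global binding.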

However, there are also considerable technical disadvantages. For example, CL is fraught with baggage and ugliness, with its voluminous language specification. Scheme did not have a library system (the same situation as elisp) until the R6RS specification in late 2007, and fully conformant implementations are scarce even for R5RS. Neither CL nor Scheme universally supports unicode, while in emacs lisp an infrastructure exists that is already employed to support full unicode for all practical purposes.

When we survey most uses of the extension language in emacs, they are predominantly text-processing related tasks, with some other uses such as embedding a text-based client for ftp, irc, mail, web browsing, OS command line interface, or file management. The benefits of fixing elisp's weaknesses with CL or Scheme for small-scale programing such as this are not major, and are offset by the various technical problems they introduce.

Effort Required

It is a fact that the effort needed to create such a system has basically hampered all such projects (although systems with CL and Scheme already exist, and are to my knowledge usable to some extent). And, the advertising needed to get people to switch from the existing system (emacs) makes it nigh impossible. These 2 facts, the effort needed to build such a system and the effort needed to cover legacy elisp so that people will switch, basically put a death knell to such projects (or place them a long, long way down the road, maybe 5, 10 years, without commercial incentive. (and with such a long gestation, at today's technological speed, i do not think the lisp langs themselves (in particular CL, Scheme, or anything with the cons business) are suitable or competitive as high-level general langs of the 2010s)).

Note: This essay is not meant to discourage the development of a new lisp emacs, especially because they already exist. This essay is meant to put to rest some common wishful thinking. As noted before, a major benefit of an emacs with Common Lisp or Scheme Lisp is that it provides a unified, packaged system for development in these languages, and since it uses the same language to extend the functionality of the editor, it will transparently increase the number of developers for the editor as well as for the language. Here are the home pages of the most prominent on-going efforts:

Hemlock (editor)↗ (Hemlock; Common Lisp)
http://common-lisp.net/project/climacs/ (Climacs; Common Lisp)
http://www-swiss.ai.mit.edu/projects/scheme/documentation/user_8.html (Edwin; Scheme)
(Thanks to Rainer Joswig for pointing out Common Lisp's multi-thread advantage, and pointing out Hemlock.)

2008-01-02

Emacs lisp Lesson: Hash Table

Emacs lisp Lesson: Hash Table

Xah Lee, 2008-01

This page shows you how to use hash table in emacs lisp. If you don't know elisp, first take a gander at Emacs Lisp Basics.

(for HTML version with colors and links, see:
http://xahlee.org/emacs/elisp_hash_table.html
)

---------------------------------------------
What Are Hash Tables

Often, you need to have a list with elements being key-value pairs. For example, the key may be a person's name, and the value can be their age. Or, in a more complex case, the key can be the person's ID number, and the value can be data comprising name, age, sex, phone number, address, etc. You might have thousands of such entries, and you want to be able to find out, given an ID, whether it exists in your data. You also want to be able to add, delete, or modify entries in the data.

From an interface point of view, such a keyed list looks like this: “((key1 value1) (key2 value2) (key3 value3) ...)”. If you want to know whether a particular key exists, you can write a loop thru the list to test whether the first element of any element matches your given key. Similarly, you can add an entry by appending to the list, or delete and modify the list using your language's list-manipulation operators or functions. All this functionality is provided in elisp, and is called an Association List.

Reference: Elisp Manual: Association-Lists.
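Here is a minimal sketch of the association-list approach just described (using dotted pairs, one common alist shape):

(let ((mydata '(("mary" . 4) ("jane" . 5) ("vicky" . 7))))
  ;; look up a key; “assoc” walks the list from the front
  (assoc "jane" mydata)          ; ⇒ ("jane" . 5)
  ;; add an entry by prepending
  (setq mydata (cons '("liz" . 9) mydata))
  ;; get just the value
  (cdr (assoc "liz" mydata)))    ; ⇒ 9

Every lookup with “assoc” scans the list element by element, which is why this approach becomes slow with thousands of entries.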

However, this method will be too slow, to the point of being unusable, if you have thousands of entries and frequently need to check, add, delete, or modify the data. This is because most languages are not smart enough to analyze your use of the list to determine how best to compile it. The solution provided in most high-level languages of today is a particular data type (called a hash table), so that when you want a list of a huge number of pairs with fast lookup, you use that data type.

For example, here's how hash table looks like in some high-level languages:

# perl
%mydata = ("mary"=> 4, "jane"=> 5, "vicky"=>7);

# PHP
$mydata = array("mary"=> 4, "jane"=> 5, "vicky"=> 7);

# Python
mydata = {"mary":4, "jane":5, "vicky":7}

(see Keyed List in Python and Perl, PHP: Keyed List) Similarly, elisp provides a hash table datatype.

Note: Hash table is also known as hash, keyed list, dictionary, map. It's called hash table for historical reasons. Basically, the technique for creating fast lookup of data is to use a so-called hash function, which converts the key into a unique (randomish) number, so that, instead of going thru the huge list checking every item, you just check one particular memory address to see if the key exists. Such a function is called a “hash function” because its key-conversion operation is analogous to preparing a type of food dish made by disorderly and irreversibly chopping up and mixing up veggies or meat. (See Hash function↗, Hash (food)↗, Hash browns↗.)

(If your data needs more entries than that, the proper solution is to use a Relational Database↗.)

---------------------------------------------
Hash Table In Elisp

To create a hash table, use “(make-hash-table :test 'equal)”.

Here's complete code that shows how to create a hash table, then add, change, and remove entries:

(let (myhash val)

  ;; create a hash table
  (setq myhash (make-hash-table :test 'equal))

  ;; add entries
  (puthash "mary" "19" myhash)
  (puthash "jane" "20" myhash)
  (puthash "carrie" "17" myhash)
  (puthash "liz" "21" myhash)

  ;; modify an entry's value
  (puthash "jane" "16" myhash)

  ;; remove an entry
  (remhash "liz" myhash)

  ;; get an entry's value, and print it
  (setq val (gethash "jane" myhash))
  (message val))

In the above example, all the keys are of datatype “string”. However, elisp's hash table can have any lisp object as key or value. When we created the hash table, we used “:test 'equal” to let elisp know that the function “equal” should be used when testing whether a key exists. If your keys are of another lisp data type, you need to specify which equality function emacs should use to test whether a key exists. The available values for “:test” are “'eql”, “'eq”, “'equal”. (You can also define your own equality test. See the elisp manual for detail. (linked at bottom of this page))
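For example, if your keys are symbols rather than strings, “'eq” suffices (and is the fastest test). A small sketch:

(let ((myhash (make-hash-table :test 'eq)))
  (puthash 'mary 19 myhash)
  (puthash 'jane 20 myhash)
  (gethash 'mary myhash))   ; ⇒ 19

Note that “'eq” would not work reliably with string keys, since two equal strings are not necessarily the same object.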

To clear all entries in a hash table, use “(clrhash «myhash»)”.
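You can verify the effect of “clrhash” with “hash-table-count”, which returns the number of entries. A small sketch:

(let ((myhash (make-hash-table :test 'equal)))
  (puthash "mary" "19" myhash)
  (puthash "jane" "20" myhash)
  (hash-table-count myhash)    ; ⇒ 2
  (clrhash myhash)
  (hash-table-count myhash))   ; ⇒ 0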

-------------------
Check If A Key Exists

To check if a key exists, use “(gethash «mykey» «myhash»)”. If it exists, its value is returned, else “nil” is returned. Example:

(let (myhash)
  (setq myhash (make-hash-table :test 'equal))
  (puthash 'mary "19" myhash)

  (gethash 'vicky myhash)) ; returns nil

“gethash” has an optional third parameter “default”. If the key does not exist, “default” is returned.
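A small sketch of the “default” parameter:

(let ((myhash (make-hash-table :test 'equal)))
  (puthash "mary" "19" myhash)
  (gethash "vicky" myhash "no such person"))   ; ⇒ "no such person"

This is handy for distinguishing a missing key from a key whose value happens to be “nil”.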

-------------------
Apply A Function To All Key-Value Pairs

To apply a function to all entries in a hash table, use “(maphash «myfun» «myhashtable»)”. The function “myfun” must take 2 arguments, key and value. For example, if you want a list of all keys, you can write a function that puts the key into a list, then print out the list. We show several examples below.

-------------------
Get All Keys

To get all keys into a list, we can use maphash with a function that pulls the keys into a list. Here's an example:

;; example of getting all keys from a hash table
(let (myhash allkeys)

  ;; add items to the hash
  (setq myhash (make-hash-table :test 'equal))
  (puthash "mary" "19" myhash)
  (puthash "jane" "20" myhash)
  (puthash "carrie" "17" myhash)
  (puthash "liz" "21" myhash)

  ;; start with an empty list
  (setq allkeys '())

  (defun pullkeys (kk vv)
    "prepend the key “kk” to the list “allkeys”"
    (setq allkeys (cons kk allkeys)))

  ;; get all the keys
  (maphash 'pullkeys myhash)

  ;; sort the keys and return them
  (sort allkeys 'string<))

In the above example, “pullkeys” is used only once. Since it is used only once, we don't need to define it separately, but can just use a lambda construct. So, the maphash line can be simply: “(maphash (lambda (kk vv) (setq allkeys (cons kk allkeys))) myhash)”.

You can define a “keys” function that returns all keys of a given hash table. Like this:

(defun keys (hashtable)
  "Return all keys in hashtable."
  (let (allkeys)
    (maphash (lambda (kk vv) (setq allkeys (cons kk allkeys))) hashtable)
    allkeys))

With this function, you can use it like this: “(keys myhash)”.

-------------------
Get All Values

Getting all values is just like getting all keys above. Here's a function that does it.

(defun hash-values (hashtable)
  "Return all values in hashtable."
  (let (allvals)
    (maphash (lambda (kk vv) (setq allvals (cons vv allvals))) hashtable)
    allvals))

-------------------
Print All Entries Sorted By Key

Here's an example of how you can print out all entries in a hash table, sorted by key.

First, we define a function that turns a hash table into a list.

(defun hash-to-list (hashtable)
  "Return a list that represents the hashtable."
  (let (mylist)
    (maphash (lambda (kk vv) (setq mylist (cons (list kk vv) mylist))) hashtable)
    mylist))

;; example of turning a hash table into a list, then sorting it
(let (myhash mylist)

  ;; add items to the hash
  (setq myhash (make-hash-table :test 'equal))
  (puthash "mary" "19" myhash)
  (puthash "jane" "20" myhash)
  (puthash "carrie" "17" myhash)
  (puthash "liz" "21" myhash)

  ;; get the hash table into a list
  (setq mylist (hash-to-list myhash))

  ;; sort the list by key and return it
  (sort mylist (lambda (a b) (string< (car a) (car b)))))

;; prints (("carrie" "17") ("jane" "20") ("liz" "21") ("mary" "19"))


-------------------
Hash Table With Lisp Objects

Elisp's hash table's key or value can be any lisp object. Also, you can define your own equality test, as well as provide a few other parameters when creating a hash table. For detail, see the elisp manual.
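For example, with “:test 'equal”, even a list can serve as a key. A small sketch:

(let ((myhash (make-hash-table :test 'equal)))
  (puthash '(3 7) "point A" myhash)
  (gethash '(3 7) myhash))   ; ⇒ "point A"

(This works because “equal” compares lists element by element; with “'eq”, the two '(3 7) lists would be distinct objects and the lookup would return nil.)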

Reference: Elisp Manual: Hash-Tables.

Emacs is beautiful!

Xah
xah@xahlee.org
∑ http://xahlee.org/