Thursday, July 30, 2009
Installing and removing a vanilla kernel
As my teacher put it:
Well, Fedora can and can't use a Vanilla kernel. The Fedora kernel is a
vanilla kernel with a bunch of patches on it. If you try to use a vanilla
kernel on fedora, it may work fine -- or you may run into odd problems that
have to do with some fedora user-land applications expecting the kernel
patches to be in there. I think most distributions apply their own custom
patches to the kernel. Debian may be an exception, I am not sure.
I spent half the day getting that vanilla kernel installed. It wouldn't boot, so I had to switch back to the original kernel from GRUB. Then I had to remove the vanilla kernel again. The build went like this:
wget -c http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.24.tar.bz2
tar xjf linux-2.6.24.tar.bz2
cd linux-2.6.24
make defconfig
make all
su -c "make modules_install install"
Removal apparently has to be done by hand -- delete these (substitute the version you actually built):
/usr/src/linux-2.6.29
/lib/modules/2.6.29
/boot/vmlinuz-2.6.29 or kernel-2.6.29
/boot/initrd-2.6.29
/boot/System.map-2.6.29
And that's that... I still have to deal with the src.rpm package... rough life...
Sunday, July 26, 2009
The great Linux font debate
A group of gurus then launched into a discussion of Chinese fonts on Linux:
---------------------------------------------------------------------------------
Best setup:
For Chinese, Vista's YaHei, the crispest on screen; for English, Consolas, the font developers rate highest. The two look great together. Also install XP's simsun (SimSun and NSimSun) to keep office software happy.
------------------------------------------------------------------------------------------------------------
For free fonts I recommend WenQuanYi; both the regular and the monospaced faces are good. Personally I find Micro Hei prettier than Zen Hei, but its character coverage is incomplete. For a Song/Ming face, use cjkuni-uming-fonts; the Taiwanese Ming face looks better than the mainland SimSun.
----------------------------------------------------------------------------------
LiHei Pro + Monaco is another nice combination.
Screenshots: http://www.linuxsir.org/bbs/post1953624-3.html
http://www.linuxsir.org/bbs/post1953637-9.html
----------------------------------------------------------------------------------
If you just want the lazy option, WenQuanYi Zen Hei on Fedora 11 is a fine choice, an improvement over Fedora 10 again.
For what it's worth, Zen Hei looks much better on Fedora than on Ubuntu; on my dad's Ubuntu box it always looks like a WenQuanYi clerical-script font...
----------------------------------------------------------------------------------
WenQuanYi, Microsoft YaHei, or LiHei (华文丽黑)
----------------------------------------------------------------------------------
Tuesday, July 21, 2009
Linux notes: rebooting and shutting down
The Linux file system flushes in-memory changes to disk relatively infrequently. This design makes disk access faster, but it also makes the file system more likely to lose data if the machine is powered off abruptly. Traditional UNIX and Linux systems were very sensitive to how they were shut down; thanks to the gloriously robust ext3fs and ext4fs things are somewhat better now :) but you should still treat the system gently, or you can easily end up with subtle problems.
On a certain commercial operating system, rebooting seems to be the best fix for most problems. When Linux misbehaves, think first: Linux problems tend to be subtler, and a blind reboot usually fixes nothing.
Situations that do call for a reboot:
you added new hardware; hardware in use is failing; a configuration file read at boot time has changed; or the machine has frozen.
2. Shutting down
Shutting down and rebooting are not quite the same thing, and there is a whole menu of ways to shut down:
1) cut the power
2) the shutdown command
3) the halt and reboot commands
4) changing the init run level with telinit
5) the poweroff command
Taking them one at a time:
1) The most direct method :) and an easy way to lose data.
2) The most graceful. shutdown takes a time argument and can broadcast a message, e.g. shutdown -h 09:30 "Going down for scheduled maintenance". While a shutdown is pending, users can no longer log in but do see the message. Useful flags: -P (power off), -h (halt), -r (reboot), -F (force fsck), -f (skip fsck); if the file systems were unmounted cleanly, fsck is normally skipped anyway.
3) The flag worth knowing for halt is -n. A normal halt calls sync to flush memory to disk; with -n, sync is not called, and data may be lost.
Linux notes: processes, part 2
Signals are process-level interrupt requests. There are about thirty of them, used roughly as follows:
* as a form of inter-process communication
* sent from the terminal to kill, interrupt, or suspend a process (<Ctrl-C> & <Ctrl-Z>)
* sent with the kill command to terminate a process
* raised by the kernel when a process commits a violation
* raised when a child process dies or when data becomes available on an I/O channel
Two things can happen when a signal arrives: 1) if the receiving process has a handler for that signal, the handler runs (in C, handlers are set up via the <signal.h> interface); 2) otherwise the kernel performs a default action on the process's behalf.
A program can protect itself by ignoring or blocking signals. An ignored signal simply disappears; a blocked signal is held pending until it is unblocked. No matter how many times a blocked signal is sent, the handler runs only once after you explicitly unblock it.
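A minimal Python sketch of both behaviors (the C equivalents are the signal/sigprocmask calls declared in <signal.h>): a handler catches SIGUSR1, and three deliveries made while the signal is blocked collapse into a single handler call after unblocking.

```python
import os
import signal
import time

calls = []

# 1) Install a handler: SIGUSR1 now runs our code instead of the default action.
signal.signal(signal.SIGUSR1, lambda signum, frame: calls.append(signum))

os.kill(os.getpid(), signal.SIGUSR1)       # delivered right away
assert len(calls) == 1

# 2) Block the signal: further deliveries are held pending, not handled...
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
for _ in range(3):
    os.kill(os.getpid(), signal.SIGUSR1)   # three sends while blocked
assert len(calls) == 1                     # handler has not run again

# ...and classic signals do not queue: unblocking delivers exactly one.
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
time.sleep(0.1)                            # let the pending signal arrive
assert len(calls) == 2
```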
#    Name   Description           Default     Can catch?  Can block?
1    HUP    Hangup                Terminate   Y           Y
2    INT    Interrupt             Terminate   Y           Y
3    QUIT   Quit                  Terminate   Y           Y
9    KILL   Kill                  Terminate   N           N
a    BUS    Bus error             Terminate   Y           Y
11   SEGV   Segmentation fault    Terminate   Y           Y
15   TERM   Software termination  Terminate   Y           Y
a    STOP   Stop                  Stop        N           N
a    TSTP   Keyboard stop         Stop        Y           Y
a    CONT   Continue after stop   Ignore      Y           N
a    WINCH  Window changed        Ignore      Y           Y
a    USR1   User-defined          Terminate   Y           Y
a    USR2   User-defined          Terminate   Y           Y
a: the number varies with the hardware architecture
2. The kill command
kill [-signal] pid
kill pid (sends TERM, which the process may catch, ignore, or block)
kill -KILL pid (guaranteed to kill the process)
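The difference is easy to demonstrate from Python (a sketch; a subprocess stands in for a real daemon): a child that ignores TERM survives kill -TERM, but nothing survives kill -KILL.

```python
import signal
import subprocess
import sys

# Start a child that ignores SIGTERM, announces readiness, then sleeps.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import signal, time;"
     "signal.signal(signal.SIGTERM, signal.SIG_IGN);"
     "print('ready', flush=True);"
     "time.sleep(30)"],
    stdout=subprocess.PIPE)
assert child.stdout.readline().strip() == b"ready"

child.terminate()                # kill -TERM: catchable/ignorable -- ignored here
try:
    child.wait(timeout=1)
except subprocess.TimeoutExpired:
    pass
assert child.poll() is None      # still alive: TERM had no effect

child.kill()                     # kill -KILL: cannot be caught, blocked, or ignored
assert child.wait() == -signal.SIGKILL   # died from signal 9
```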
3. Process states
runnable    the process can run
sleeping    the process is waiting for a resource
zombie      the process has exited but has not yet been reaped by its parent
stopped     the process is suspended
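On Linux these states show up as a single letter in field 3 of /proc/<pid>/stat (a Linux-specific interface). A small sketch, including how a zombie comes about:

```python
import os
import time

def proc_state(pid):
    # Field 3 of /proc/<pid>/stat is the state letter: R, S, Z, T, ...
    with open(f"/proc/{pid}/stat") as f:
        return f.read().split()[2]

assert proc_state(os.getpid()) == "R"   # this process is running right now

pid = os.fork()
if pid == 0:
    os._exit(0)                  # child exits immediately...
time.sleep(0.2)
assert proc_state(pid) == "Z"    # ...and stays a zombie until it is reaped
os.waitpid(pid, 0)               # the parent collects its status; zombie gone
```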
4. nice & renice
The nice value says how nice a process is, in [-20, +19]: the nicer the process, the lower its run priority.
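In Python (a sketch; renicing another running process would go through os.setpriority), os.nice(increment) nudges the calling process's own nice value:

```python
import os

before = os.nice(0)              # an increment of 0 just reads the current value
after = os.nice(5)               # become 5 units nicer, i.e. lower priority
assert after == min(before + 5, 19)   # nice values are clamped to [-20, +19]
# Going the other way (os.nice(-5)) would require root privileges.
```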
5. Monitoring processes with ps
ps [aux/lax]
6. Advanced monitoring with top
ps gives only a snapshot; top monitors the system continuously. Press q to quit, and other keys change what is being monitored.
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Linux笔记之进程
Linux进程貌似是一个相当复杂的系统,每一个进程就是一个运行的程序的抽象。通过他可以管理和监视程序使用的内存,CPU时间,输入输出资源。
一个进程包含地址空间以及一系列的内置于内核的数据结构。
地址空间就是一些被内核标记为归进程使用的内存页(pages,内存单元PC机上为4k),里面包含进程执行的代码和库,进程的变量,进程使用的栈,以及其他进程运行时内核需要的额外信息。因为Linux是虚拟内存系统,因此这些地址到底是在物理内存中还是swap分区中并没有关系。
内核内置的数据结构每个进程的一些信息,一些比较重要的列举如下:
*进程的地址空间映射
*进程当前的状态(sleeping,stopped,runnable等)
*进程运行的优先级
*进程所使用资源的信息
*进程所打开的文件以及使用的网络端口号
*进程的信号标志(信号被阻塞记录)
*进程的所有者
一些进程共享这些属性产生了所谓的进程群(thread group),这个是Linux类似传统的UNIX系统的多进程机制。虽然这些进程共享地址空间,但是这些进程群中的进程都有自己的执行优先级和执行状态。其实很少有程序使用多进程机制来运行程序。
Several parameters attached to a process affect how it runs, for example how much processor time it gets and which files it may read. They are: PID, PPID, UID & EUID, GID & EGID.
1) PID: process ID number
The kernel assigns every process an ID, used by many commands and system calls.
2) PPID: parent PID
Linux has no system call that conjures up a new process from scratch; instead, an existing process copies itself, and the copy then changes its attributes and runs a different program. The original process is therefore the new one's parent, recorded as its PPID.
The PPID is handy when you are faced with an unknown process: like father, like son.
3) UID & EUID: real and effective user ID
The UID is the ID of the user who created the process, or more precisely the UID of its parent process. The EUID determines what the process may do: what ownership newly created objects get, which files it may access, and which processes it may signal via the kill system call. UID and EUID are usually the same; having two exists to separate identity from permission, so that setuid programs do not have to toggle their permissions constantly.
There is also an FSUID governing file system access rights; it is rarely used outside the kernel.
4) GID & EGID
The GID identifies the process's group; the EGID plays the same role as the EUID.
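All of these identities can be read directly by the process itself; a quick Python sketch:

```python
import os

pid, ppid = os.getpid(), os.getppid()    # 1) and 2): our ID and our parent's
uid, euid = os.getuid(), os.geteuid()    # 3): real vs. effective user
gid, egid = os.getgid(), os.getegid()    # 4): real vs. effective group

assert pid > 0 and ppid >= 0 and pid != ppid
# Outside of setuid/setgid programs the pairs normally coincide:
assert uid == euid and gid == egid
```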
2. The life cycle of a process
A process copies itself with the fork system call to produce a new process, very much like the original but with a different PID. fork has the curious property of returning two different values: 0 in the child, and the child's new PID in the parent, so testing the return value tells you which one you are. The child then typically runs a new program via one of the exec family of calls, which reinitializes its data and stack; the exec variants differ only in how arguments and environment are passed.
Linux also has a second creation mechanism distinct from fork: the clone call, which produces a process sharing memory or I/O with its parent, used mostly to build multithreaded programs.
The first process created at boot is init, which runs all the startup scripts and also takes care of process-management chores. When a process finishes, it calls _exit to tell the kernel it has died, handing back a number (the exit code) that says how it died; 0 conventionally means a normal exit. Before the corpse is cremated, Linux notifies the parent via wait. If the process did not do itself in, the parent learns from the exit code how its child was killed, and can also learn how many resources the child consumed :)
This design works beautifully as long as wait is called normally. But if the parent has already died, or for some other reason never calls wait, the dead process's children are orphaned and handed to init to raise (shades of Lin Daiyu).
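The fork/wait handshake above fits in a few lines of Python (C's fork(2)/waitpid(2) look the same; the exit code 7 is an arbitrary example):

```python
import os

pid = os.fork()                  # returns twice: 0 in the child, child's PID here
if pid == 0:
    # The child would normally exec a new program at this point.
    os._exit(7)                  # _exit hands the kernel an exit code
else:
    reaped, status = os.waitpid(pid, 0)   # the wait: learn how the child died
    assert reaped == pid
    assert os.WIFEXITED(status) and os.WEXITSTATUS(status) == 7
```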
Linux notes: booting, part 2
1. Set the hostname
2. Set the time zone
3. Check the file systems with fsck
4. Mount the system disks
5. Purge old files from /tmp
6. Configure the network interfaces
7. Start daemons and network services
init defines 7 run levels:
1. Level 0: the system is halted
2. Level 1, also called S: single-user mode
3. Levels 2-5: multiuser levels
4. Level 6: reboot
Multiuser operation normally lives at levels 2 and 3; level 4 is rarely used, and level 5 is used by the X Window System startup process, e.g. xdm. Linux supposedly supports 10 run levels, but levels 7-9 are undefined.
Monday, July 20, 2009
WenQuanYi, Chinese fonts for Linux
I hadn't noticed this project at first, which left my course-project document with characters of every size. Good stuff; let me spread the word.
Official site: http://wenq.org/
As the witness of several millennia of Chinese civilization and the vehicle of a vast traditional culture, the Chinese script is one of the emblems of Eastern civilization that every Chinese person takes pride in. Our ancestors created and wrote these characters, and out of the endless expressive power of the characters and the language built literature and art that still astonish. Today, in the computer age, we no longer write with brush on silk and bamboo as the ancients did, but our lives remain inseparable from the script.
It is no exaggeration to say that the Chinese script is one of the largest and most complex symbol systems in the world. As early as the Shang dynasty our ancestors produced oracle-bone writing in enormous quantity; more than four thousand distinct characters have been identified on the tens of thousands of excavated bone fragments. The Shuowen Jiezi, compiled by Xu Shen in the Eastern Han, records 9,353 characters (10,516 by another count), and the Kangxi Dictionary, compiled in the Kangxi era of the Qing, contains as many as 47,035. Counting minority scripts, and the ancient and variant characters that appear in classical texts without ever entering wide use, the total exceeds one hundred thousand.
We are a group of volunteers devoted to promoting Chinese characters in the computing world and enriching the pool of digital Chinese resources. Through our unpaid work we hope that anyone, in any corner of the world, can obtain our digital Chinese resources free of charge and communicate fluently in Chinese. "WenQuanYi" is a non-profit organization founded spontaneously around these goals.
For producing digital Chinese resources, WenQuanYi has laid out the following sub-projects:
- an open-source bitmap Chinese font library (optimized for screen display)
- an open-source vector Chinese font library
- an open-source database of character strokes and stroke order
- an open-source Chinese character image-recognition system
- an open-source Chinese stroke-recognition system
- an open-source Chinese character information system (definitions, encodings, etc.)
- an open, unofficial digital Chinese character standard
Concretely, we hope to produce bitmap glyphs (9pt, 10pt, 11pt, 12pt, etc.) covering the 70,000-plus characters of Unicode 4.0, a stroke and stroke-order database, and, generated from it, vector font libraries in different styles (light Song, medium Song, newspaper Song, and so on). The work can then extend to pronunciation and definitions for every character, character lookup and classification by stroke order, bitmap and vector fonts for minority scripts, and the development of image- and stroke-recognition algorithms and software.
The Rise of ``Worse is Better''
I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase ``the right thing.'' To such a designer it is important to get all of the following characteristics right:
* Simplicity-the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
* Correctness-the design must be correct in all observable aspects. Incorrectness is simply not allowed.
* Consistency-the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
* Completeness-the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.
I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the ``MIT approach.'' Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.
The worse-is-better philosophy is only slightly different:
* Simplicity-the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
* Correctness-the design must be correct in all observable aspects. It is slightly better to be simple than correct.
* Consistency-the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
* Completeness-the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.
Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the ``New Jersey approach.'' I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.
However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.
Let me start out by retelling a story that shows that the MIT/New-Jersey distinction is valid and that proponents of each philosophy actually believe their philosophy is better.
Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, re-enters the system routine. It is called ``PC loser-ing'' because the PC is being coerced into ``loser mode,'' where ``loser'' is the affectionate name for ``user'' at MIT.
The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.
The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right tradeoff has been selected in Unix-namely, implementation simplicity was more important than interface simplicity.
The MIT guy then muttered that sometimes it takes a tough man to make a tender chicken, but the New Jersey guy didn't understand (I'm not sure I do either).
Now I want to argue that worse-is-better is better. C is a programming language designed for writing Unix, and it was designed using the New Jersey approach. C is therefore a language for which it is easy to write a decent compiler, and it requires the programmer to write text that is easy for the compiler to interpret. Some have called C a fancy assembly language. Both early Unix and C compilers had simple structures, are easy to port, require few machine resources to run, and provide about 50%--80% of what you want from an operating system and programming language.
Half the computers that exist at any point are worse than median (smaller or slower). Unix and C work fine on them. The worse-is-better philosophy means that implementation simplicity has highest priority, which means Unix and C are easy to port on such machines. Therefore, one expects that if the 50% functionality Unix and C support is satisfactory, they will start to appear everywhere. And they have, haven't they?
Unix and C are the ultimate computer viruses.
A further benefit of the worse-is-better philosophy is that the programmer is conditioned to sacrifice some safety, convenience, and hassle to get good performance and modest resource use. Programs written using the New Jersey approach will work well both in small machines and large ones, and the code will be portable because it is written on top of a virus.
It is important to remember that the initial virus has to be basically good. If so, the viral spread is assured as long as it is portable. Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90%, but users have already been conditioned to accept worse than the right thing. Therefore, the worse-is-better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing. In concrete terms, even though Lisp compilers in 1987 were about as good as C compilers, there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better.
The good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be Unix and C++.
There is a final benefit to worse-is-better. Because a New Jersey language and system are not really powerful enough to build complex monolithic software, large systems must be designed to reuse components. Therefore, a tradition of integration springs up.
How does the right thing stack up? There are two basic scenarios: the ``big complex system scenario'' and the ``diamond-like jewel'' scenario.
The ``big complex system'' scenario goes like this:
First, the right thing needs to be designed. Then its implementation needs to be designed. Finally it is implemented. Because it is the right thing, it has nearly 100% of desired functionality, and implementation simplicity was never a concern so it takes a long time to implement. It is large and complex. It requires complex tools to use properly. The last 20% takes 80% of the effort, and so the right thing takes a long time to get out, and it only runs satisfactorily on the most sophisticated hardware.
The ``diamond-like jewel'' scenario goes like this:
The right thing takes forever to design, but it is quite small at every point along the way. To implement it to run fast is either impossible or beyond the capabilities of most implementors.
The two scenarios correspond to Common Lisp and Scheme.
The first scenario is also the scenario for classic artificial intelligence software.
The right thing is frequently a monolithic piece of software, but for no reason other than that the right thing is often designed monolithically. That is, this characteristic is a happenstance.
The lesson to be learned from this is that it is often undesirable to go for the right thing first. It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing.
A wrong lesson is to take the parable literally and to conclude that C is the right vehicle for AI software. The 50% solution has to be basically right, and in this case it isn't.
But, one can conclude only that the Lisp community needs to seriously rethink its position on Lisp design. I will say more about this later.
Linux is obsolete
I was in the U.S. for a couple of weeks, so I haven't commented much on
LINUX (not that I would have said much had I been around), but for what
it is worth, I have a couple of comments now.
As most of you know, for me MINIX is a hobby, something that I do in the
evening when I get bored writing books and there are no major wars,
revolutions, or senate hearings being televised live on CNN. My real
job is a professor and researcher in the area of operating systems.
As a result of my occupation, I think I know a bit about where operating
systems are going in the next decade or so. Two aspects stand out:
1. MICROKERNEL VS MONOLITHIC SYSTEM
Most older operating systems are monolithic, that is, the whole operating
system is a single a.out file that runs in 'kernel mode.' This binary
contains the process management, memory management, file system and the
rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360,
MULTICS, and many more.
The alternative is a microkernel-based system, in which most of the OS
runs as separate processes, mostly outside the kernel. They communicate
by message passing. The kernel's job is to handle the message passing,
interrupt handling, low-level process management, and possibly the I/O.
Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the
not-yet-released Windows/NT.
While I could go into a long story here about the relative merits of the
two designs, suffice it to say that among the people who actually design
operating systems, the debate is essentially over. Microkernels have won.
The only real argument for monolithic systems was performance, and there
is now enough evidence showing that microkernel systems can be just as
fast as monolithic systems (e.g., Rick Rashid has published papers comparing
Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.
MINIX is a microkernel-based system. The file system and memory management
are separate processes, running outside the kernel. The I/O drivers are
also separate processes (in the kernel, but only because the brain-dead
nature of the Intel CPUs makes that difficult to do otherwise). LINUX is
a monolithic style system. This is a giant step back into the 1970s.
That is like taking an existing, working C program and rewriting it in
BASIC. To me, writing a monolithic system in 1991 is a truly poor idea.
2. PORTABILITY
Once upon a time there was the 4004 CPU. When it grew up it became an
8008. Then it underwent plastic surgery and became the 8080. It begat
the 8086, which begat the 8088, which begat the 80286, which begat the
80386, which begat the 80486, and so on unto the N-th generation. In
the meantime, RISC chips happened, and some of them are running at over
100 MIPS. Speeds of 200 MIPS and more are likely in the coming years.
These things are not going to suddenly vanish. What is going to happen
is that they will gradually take over from the 80x86 line. They will
run old MS-DOS programs by interpreting the 80386 in software. (I even
wrote my own IBM PC simulator in C, which you can get by FTP from
ftp.cs.vu.nl = 192.31.231.42 in dir minix/simulator.) I think it is a
gross error to design an OS for any specific architecture, since that is
not going to be around all that long.
MINIX was designed to be reasonably portable, and has been ported from the
Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016.
LINUX is tied fairly closely to the 80x86. Not the way to go.
Don't get me wrong, I am not unhappy with LINUX. It will get all the people
who want to turn MINIX in BSD UNIX off my back. But in all honesty, I would
suggest that people who want a **MODERN** "free" OS look around for a
microkernel-based, portable OS, like maybe GNU or something like that.
Andy Tanenbaum (a...@cs.vu.nl)
P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user
space), but it is far from complete. If there are any people who would
like to work on that, please let me know. To run Amoeba you need a few 386s,
one of which needs 16M, and all of which need the WD Ethernet card.
Sunday, July 19, 2009
First encounter with Lisp
Lisp syntax is rather strange: it uses prefix expressions, which strengthen its expressive power while scaring many people away. Lisp is fifty years old, yet tragically this elegant, powerful language has never gone mainstream; in China you can't even buy a Lisp book. Lisp is a language for masters: skyscrapers are designed by masters; C is the migrant-worker language the skyscrapers get built with. After only the first section of SICP I could already feel its flexibility and expressive power.
Lisp has lately been on the rise again.
Let's hear what experts in other languages say. Bruce Tate, a well-known Java expert and Jolt-award-winning author, wrote in his 2007 piece "The beauty of Lisp, the king of programming languages":
"Lisp has long been regarded as one of the great programming languages. The fanatical following it has inspired over its long history (nearly fifty years) tells you it is an extraordinary language. At MIT, Lisp holds a central place in every programmer's curriculum. Entrepreneurs like Paul Graham used Lisp's exceptional productivity as the booster that launched their businesses. To the endless chagrin of its devotees, however, Lisp never became a mainstream language. As a Java programmer, if you spend a little time in Lisp's forgotten city of gold, you will find many techniques that can improve the way you code."
This Java expert clearly studies Lisp the way one studies a fossil. To me, Lisp feels more like a newborn nebula, incubating future life. Lisp brings you closer to how the machine thinks; while learning the language you also learn how interpreters and compilers work. SICP likewise teaches the human capacity for abstraction, lifting your thinking to another level. Now for something concrete:
* Lisp has built-in lists, far more mature than Java's built-in Vector; Python's imitation comes closest.
* Automatic memory management, where Lisp again outdoes Java and Python.
* Dynamic typing.
* Anonymous functions: everyone else copied these from Lisp, and Java's anonymous-function syntax is painfully verbose.
* Uniform syntax. No Lisp programmer agonizes over forgotten syntax; Lisp strips away all the syntactic sugar.
Other languages write a+b+c+d; Lisp writes (+ a b c d). Virtually every form is (function <argument list>).
* An interactive environment, which leads many to assume Lisp is purely interpreted; it is not.
* Extensibility: Lisp is a programmable programming language :)
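To see how little machinery that uniform (function <argument list>) shape needs, here is a toy prefix-expression evaluator -- a hypothetical Python sketch, with expressions modeled as nested lists and only three operators wired up:

```python
import operator
from functools import reduce

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    """Evaluate a prefix expression such as ["+", 1, 2, 3, 4]."""
    if not isinstance(expr, list):           # a bare number is its own value
        return expr
    op, *args = expr                         # (op arg1 arg2 ...)
    return reduce(OPS[op], (evaluate(a) for a in args))

assert evaluate(["+", 1, 2, 3, 4]) == 10      # (+ 1 2 3 4)
assert evaluate(["*", 2, ["+", 3, 4]]) == 14  # (* 2 (+ 3 4))
```

Every form has the same shape, which is exactly why the evaluator stays this small.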
Given that Lisp is more expressive than Java and Python, how does it perform?
| Test | Lisp | Java | Python | Perl | C++ | Notes |
|---|---|---|---|---|---|---|
| exception handling | 0.01 | 0.90 | 1.54 | 1.73 | 1.00 | |
| hash access | 1.06 | 3.23 | 4.01 | 1.85 | 1.00 | |
| sum numbers from file | 7.54 | 2.63 | 8.34 | 2.49 | 1.00 | 100+ x C++ |
| reverse lines | 1.61 | 1.22 | 1.38 | 1.25 | 1.00 | 50-100 x C++ |
| matrix multiplication | 3.30 | 8.90 | 278.00 | 226.00 | 1.00 | 10-50 x C++ |
| heapsort | 1.67 | 7.00 | 84.42 | 75.67 | 1.00 | 5-10 x C++ |
| array access | 1.75 | 6.83 | 141.08 | 127.25 | 1.00 | 1-5 x C++ |
| list processing | 0.93 | 20.47 | 20.33 | 11.27 | 1.00 | 0-1 x C++ |
| object instantiation | 1.32 | 2.39 | 49.11 | 89.21 | 1.00 | |
| word count | 0.73 | 4.61 | 2.57 | 1.64 | 1.00 | |
| 25% to 75% | 0.93 to 1.67 | 2.63 to 7.00 | 2.57 to 84.42 | 1.73 to 89.21 | 1.00 to 1.00 | |
The table (run times normalized to C++ = 1.00) shows Lisp running roughly 1.5 to 4 times faster than Java and 10 to 50 times faster than Python (which, to be fair, keeps improving), while staying close to C++.
Enough talk; better to recommend a few Lisp books:
Paul Graham's On Lisp and ANSI Common Lisp
SICP (Scheme), MIT Press
Lisp environments:
For Scheme, MIT Scheme (a bit problematic on Linux); PLT Scheme has served me well.
For Common Lisp, cmucl, sbcl, and clisp are all fine; gcl reportedly isn't ANSI-compliant, and on my Fedora box it just segfaults.
Monday, July 6, 2009
Linux notes: booting & GRUB
A Linux boot has five main phases:
1. loading and initializing the kernel;
2. detecting and configuring devices;
3. spawning kernel processes;
4. running the system startup scripts;
5. multiuser operation.
Let's look at each in turn.
1. Kernel initialization
The Linux kernel is itself a program, and the first step of booting is getting that program into memory and running it. The kernel normally lives at /vmlinuz or /boot/vmlinuz. Loading happens in two stages: first, the ROM loads a boot loader into memory, and that boot loader loads the kernel. The kernel then sizes up the RAM and reserves a chunk of memory for itself, which no user-level program may ever use.
2. Device detection
The kernel's first task is to survey the machine and see what hardware is present, then pick the right drivers -- which of course you must have told it about when you compiled the kernel.
3. Kernel processes
When initialization is done, the kernel spawns several spontaneous processes in user space.
4. Running the system startup scripts (ordinary shell scripts)
5. Multiuser operation
Once the startup scripts have run, the system is fully operational, though nobody is logged in yet. A login must be accepted on some terminal, watched over by a process called getty, spawned directly by init; init is also responsible for spawning the graphical login managers xdm and gdm.
Two boot loaders are common on PCs: LILO and GRUB.
LILO was the traditional Linux boot loader, but because it handles multi-boot poorly, Red Hat, SUSE, and Fedora all adopted GRUB as the default.
A few words on GRUB (the following text contributed by ajaxhe):
----------------------------------------------------------------------------------------------------------------------
Fedora Linux uses GRUB as its boot loader. GRUB (GRand Unified Bootloader) is a general-purpose boot loader that lets you choose, at boot time, which operating system to start, or which of several versions of the same system. GRUB is the most widely deployed boot loader today, and many Linux systems now use it by default.
GRUB offers three powerful user interfaces; each can boot an operating system directly, and during startup you can even switch among the three.
The first is the menu interface, the default boot interface on GRUB-based Linux systems. After installation, the menu appears on screen when the machine boots; pick the system to boot with the up/down arrows and press Enter. If nobody intervenes for a while, GRUB boots the default entry automatically.
The second is the menu entry editor: press "e" on a boot menu entry to enter it. Here you can modify the boot entries temporarily. For example, "o" adds an entry after the current line, "O" adds one before it, "d" deletes an entry, "e" edits one, and so on. Press Enter to confirm a change, or Esc to abandon it.
Note: any change made in the entry editor is temporary and will be gone at the next boot (to change the menu permanently, edit grub.conf). This interface is especially handy for testing a freshly built kernel.
The third is the command-line interface: press "c" in the boot menu to reach it. The command line is GRUB's most basic interface, and also its most powerful.
GRUB names storage devices and their partitions like this:
(<device><number>,<partition number>)
where <device> is the device type (all hard disks, IDE or SCSI alike, are called hd, and floppies fd), the first number is the drive index counted from zero, and the partition number is likewise counted from zero.
When naming and referring to devices and partitions, GRUB does not distinguish IDE disks from SCSI disks; every hard disk is hd. To refer to a whole disk regardless of its partitions (as when installing GRUB into a disk's master boot record (MBR)), simply drop the comma and the partition number.
-- adapted from "Fedora 8 Linux from Beginner to Master", Publishing House of Electronics Industry, ed. Xing Guoqing, Ren Yongjie, Zhang Kai
--------------------------------------------------------------------------------------------------------------------------
So to set up dual boot, configure as follows:
Open /etc/grub.conf.
Windows is handled differently from UNIX systems:
title Windows XP
rootnoverify (hd0,0)
chainloader +1
chainloader loads a boot loader from the given location (here, the first sector of the first partition of the primary disk).
Read rootnoverify as root-no-verify: it tells GRUB not to try to mount the given partition, which keeps GRUB from getting bogged down in partitions it cannot read -- an NTFS partition, say, or one beyond the range GRUB can reach.
In (hd0,0), the first number is the physical disk index and the second is the partition number, so (hd0,0) = "/dev/hda1". XP insists on sitting on partition 0 (shame on you!).
例子:
default=0
timeout=0
splashimage=(hd0,2)/boot/grub/splash.xpm.gz
hiddenmenu
title Windows XP
rootnoverify (hd0,0)
title Red Hat
root (hd0,1)
kernel /boot/vmlinuz
title Fedora
root (hd0,2)
kernel /boot/vmlinuz