
Interview: Jim Gettys (Part II)

The first part of our interview with Jim Gettys covered many aspects of the "One Laptop Per Child" (OLPC) system. With the second and final installment, we look at a number of other issues, including the software which will run on the system, security issues, and more.

LWN: Time for a few questions about the mix of software you envision running on the OLPC systems. To start with, it appears that the system will be based on a pared-down Fedora-based distribution?

To date, Red Hat is the one putting serious engineering into OLPC and doing most of the heavy lifting on the base system, and has assembled a first-rate team, including Dave Woodhouse and Marcelo Tosatti.

Red Hat is putting together a pared-down Fedora derivative distribution. The community being what it is, I expect others will put other distributions on the machine as well, given that the OLPC system is an open platform. I'm not sure such duplication is worthwhile, but I'm resigned to it.

I *really* urge everyone to cooperate very strongly at the level of the software for the kids, no matter what the underlying distribution in the long term. This project is fundamentally about kids learning and helping the world; not about free software.

That being said, *free software enables the kids to learn computing in a way that they cannot learn it on closed proprietary platforms*. We are therefore very much part of the free and open source software community.

Chris' team is putting together a Python-based environment (into which conventional applications can be embedded) aimed at young children, temporarily called "Sugar" (more information can be found in several postings); conventional GUIs are not good for children still learning to read. I have an 8-year-old son and an 11-year-old daughter, and so have seen first hand over the last few years how unsuitable conventional desktops are for young children.

The Sugar environment, by using the Avahi library (zeroconf/mdns technology), will show the presence of people on the network as a fundamental aid to building collaborative applications. Collaboration is fundamental to learning: most kids learn from their peer group, and teachers serve as the guides.
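The core of that presence mechanism is simple: each machine announces a small service record to the network and listens for its peers' announcements. A toy stdlib-only sketch of the idea, using loopback UDP in place of real mDNS/DNS-SD (the service name and record format here are invented for illustration, not Avahi's actual protocol):

```python
import json
import socket

def announce(name: str, tx: socket.socket, addr: tuple) -> None:
    """Send a tiny presence record, the way mDNS announces a service."""
    record = {"name": name, "service": "_sugar-presence._udp"}  # invented name
    tx.sendto(json.dumps(record).encode(), addr)

def listen_once(rx: socket.socket) -> dict:
    """Receive one presence record and return who is on the network."""
    data, _ = rx.recvfrom(1024)
    return json.loads(data)

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))          # loopback stands in for multicast here
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    announce("kid-laptop-1", tx, rx.getsockname())
    print(listen_once(rx)["name"])     # kid-laptop-1
```

The real system multicasts these records continuously and caches what it hears, so the "who is around?" view updates as machines join and leave the mesh.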

Will the systems as shipped be 100% free software?

OLPC itself only plans to ship free software at this time.

If not, what other code do you think you might include?

Let me give a concrete example: some countries have coded some educational content in Flash.

We are strongly recommending that future content not be created in closed formats like Flash, which lack tools for manipulation and present major problems for accessibility tools.

But some countries have such content today, and need to use it immediately. And since there is so much Flash content on the web, it would not surprise me if countries arrange for Flash to be installed, even if they do not have existing educational content in Flash. We are encouraging everyone to use open, standards-based formats, and to release useful content under appropriate free Creative Commons and free software licenses.

Reality being what it is, even if we had veto power over software on the machine (which we certainly don't), we'll see such software included on the machine by the time it is in children's and teachers' hands; just not distributed from OLPC, but added on afterwards.

Is it true that there will be no package manager on this system?

This is an area where we continue to have discussions. Whether it is like conventional packaging systems or not is unclear right now. Time will tell.

If so, will there be any provision for customizing the system software mix, installing localization data, or updating software?

Of course. There has to be. By middle school, kids are taking different courses and languages. Some kids have special needs, either advanced or the opposite. One of the major advantages an OLPC machine has over conventional books is the possibility of bringing much more content, appropriate for each child, to children who live in places lacking the libraries we find in U.S. and other developed nations' schools.

A system like this clearly needs a good set of educational applications - an area where free software has not traditionally been strong. What sort of applications are you looking at in this area, and where might any missing pieces come from?

There is much software sold claiming to be "educational" software, which isn't. The reality is that there aren't all that many good educational applications on *any* platform. The pickings are pretty slim.

We believe all children learn by doing, and should be authoring content, not only passively reading unchangeable, engraved-in-stone content. We are placing some major bets on wiki technology as a base for this. (Not wiki markup, though!)

I'd also like to draw your attention to a web site: logowiki.net. Content doesn't have to be static at all, and content can be programs, even programs for young children. The ability to run simulations and manipulate the starting conditions is a major tool to learning.

And one more hardware feature is unique to our machine: you can choose to use the audio input as a direct analog input, allowing direct measurements to be made with very cheap sensors (e.g. photodiodes, accelerometers, etc.).

With the likes of Seymour Papert and Alan Kay involved, I think we're in for some fun stuff. For example I saw a demo this week of a wonderful music application called TamTam, that Jean Piche' et al in Canada and Ireland are building using the Csound synthesis software Barry Vercoe originally developed.

Will the systems include a compiler, or, failing that, an interpreter for a language like Python?

Of course.

At a minimum, we expect Logo, JavaScript, and Python to be present, and compilers as well when needed by interested kids. Learning programming, though, is best done in an interpreted-language environment rather than with compiled languages. Certainly C++ should never be a child's first computer language.

In general, is hacking one of the uses to which you think these machines will be put?

Hacking, in the original positive sense of the word (see the Jargon File), we certainly believe should be strongly encouraged in interested children: computing is a fundamental skill in today's society. For others, computers will just be a tool; their pen and paper.

Do you expect that the kids will have root access on their systems?

Yes: we want children to be able to learn computing, if they are interested. For the kids' systems, we want them "easy to fix" rather than "hard to break". For the schools' servers, a shared resource, we want them to be "hard to break" *and* "easy to fix", and we are exploring technologies like those developed by PlanetLab.

Being root on your own personal machine is fundamentally different than having any access to information on the network you should not have. Project Athena at MIT (where such technologies as Kerberos, X11, and the first network IM system, among others, were developed) demonstrated this even 20 years ago: on those systems, having root access does not get you access to anything but the services you had access to as an individual user. The root password on those systems has been posted for years: it just doesn't matter, if you do your homework properly.

The plan to use LinuxBIOS is interesting. Are there reasons driving that choice (beyond cost)?

Cost isn't a real factor in this decision. The royalty rate of a conventional BIOS is in fact very low.

Capability:

  • We'd like to be able to boot over the mesh network for (re)install.
  • We may need to follow Mark Foster's fast suspend/resume path, in which case having full source may be essential to its success.
  • We want interested kids to be able to see how computers really work and learn accordingly.

Some years ago when I was working on Linux on the iPAQ, we had a 12 year old who was hacking on our boot loader, and learning tremendously as a result. Those outstanding kids should also have the opportunity to learn computing deeply.

Is LinuxBIOS ready for this sort of deployment?

We think so. It's already deployed commercially at pretty high scale on a number of products. And the very fast suspend/resume we may need to implement requires complete freedom of action at all levels of the machine.

Millions of identical systems, mostly lacking professional administration, would seem like a magnet for malware authors. What sort of thought is being put into preventing these systems from becoming worm carriers and large-scale zombie networks?

I doubt they will be exactly identical (other than hardware): language is but one of the obvious differences; children take different courses, and study different languages.

And if they need professional administration, we've failed.

Our view is that systems cannot require professional administration at a local level: we could not deploy quickly on this scale and have sufficient expertise if this were required. Part of the IPv6 argument is exactly to allow administration to scale and to simplify administration.

Eugene Kaspersky, who has been predicting Linux doom for years, is now saying that the OLPC will result in a new wave of malware from the developing world. Do you find this outcome plausible? Why or why not?

I don't find Mr. Kaspersky's arguments plausible, for a number of reasons.

We certainly are aware that security is a challenge: young children are not noted for choosing and keeping good, secure passwords, and we are looking at other methods as a result.

We expect to deploy SELinux protecting the "standard" network services on our machines to help protect against day-0 attacks, to prevent bad guys from successfully attacking our systems and prevent such people from using our systems as a point of attack.

And keeping our systems up to date automatically is obviously essential.

As far as malware from the developing world: malware for what? Malware is very rare on Linux or Apple's OS X systems: both break out of the starting gate much less insecure than Windows, and writing malware for either is inherently more difficult. And if Mr. Kaspersky is worried about kids in the developing world writing malware on the OLPC systems to attack Windows, how are the kids going to test such Windows malware, since our machines are running Linux?

Malware authors working for profit (e.g. stealing passwords and accounts) are certainly going to be older than our kids, and will find a standard Windows system a much more productive development environment, and internet cafés a much more anonymous place to launch attacks than our school environments.

Lastly, both at the school servers and on the networks supporting them, we have good places to prevent, stop, and track down any such attacks, much better than you'd find in the anonymous world of internet cafés, where anyone can pay for anonymous usage.

And high bandwidth back-haul from schools is unlikely to be very common, limiting the problem if it does occur. There are much better targets for zombies: e.g. systems all over the developed world where each machine has a high bandwidth broadband connection, rather than a kid's machine on a large shared mesh network back connected through a similar single connection. Per compromised machine, there may be a difference of a hundred to one of useful bandwidth.

Seems to me that Mr. Kaspersky knows not what he writes of, and is trying to gain eyeballs on his stories by sensationalism.

Given the state of the art, the chances of a security vulnerability turning up in the shipped OLPC systems must be near 100%. What happens then? How will OLPC users obtain and install security fixes?

Our challenge is not just the kids' machines and operating system, but also the deployment of server machines in all the schools, to cache and distribute software and educational content to all of them.

We expect the kids' machines to be updated from school servers, and possibly from other kids' systems.

The closest management tools to what we need appear to be many of the technologies developed by PlanetLab: the commercial distributed content systems are unlikely to work, presuming as they do that systems are in data centers and always/usually available over high-speed networks.

At our scale, (and with the highly variable Internet connections we expect), a presumption of constant connectivity seems untenable. We'll know more as we look into this aspect of the project more fully over the next few months.

Some commenters on LWN have expressed concerns that many of these systems may be stolen from the children and used for (or sold to fund) rather less wholesome ends. Is this an issue which the OLPC team has thought about?

Yes, we've thought about it quite a bit.

How could this risk be minimized?

First, we intend that the systems be instantly recognizable as kids' systems, not only so that kids like them, value them more, and take care of them carefully, but also so that adults with machines in their possession may be asked whether they should have them. And these systems are physically sized for smaller children, our primary "customers"; while adults can use them, they are less comfortable for adults than a bigger machine would be.

Second, public education about these distinctive systems is a topic we've discussed with deployment countries as a deterrent.

Third, by saturating each area during deployment, rather than distributing machines piecemeal, we can expect not only much better mesh networking performance, but also less child-from-child theft.

Fourth, there will be a commercial version of the machine (that will look quite different) at some point in the project, to reduce the pressure for these unique systems. As I explained before, there are quite a few ways in which these machines are unique, so we'd like there to be fewer reasons for theft, by enabling a commercial version. These commercial machines may also help cross subsidize the children's machine, for as long as the market might bear such a price differential.

Fifth, by its nature, there is a network MAC address in each machine that can aid its tracing, in the case of theft, once a system is recovered. We are, however, very concerned about the child's privacy and safety, and so don't want the systems to go around broadcasting the hardware MAC address in the normal case.

And we're exploring some other possible identity systems as well that may help in this area.

A huge "thank you" is due to Jim, who clearly took a great deal of time to respond to LWN's questions in such detail.


Machine ID

Posted Jul 5, 2006 1:19 UTC (Wed) by ccyoung (guest, #16340) [Link] (2 responses)

While I am 200% behind the OLPC, what has bothered me the most is how the machines will preserve the anonymity of the user on one hand while being part of an ongoing, upgradable mesh on the other.

If, for example, censorial countries could trace comments back to the author, these kids (and their families) could be thrown into the teeth of the dragon. I remember how much we fought the idea of CPU ID and every Internet message stamped identifying the computer. This is much more frightening because of the innocence involved.

Machine ID

Posted Jul 5, 2006 3:43 UTC (Wed) by mdomsch (guest, #5920) [Link] (1 responses)

See RFC3041
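For reference, RFC 3041 ("Privacy Extensions for Stateless Address Autoconfiguration in IPv6") addresses exactly this: instead of embedding the hardware MAC in the IPv6 address, hosts derive short-lived, randomized interface identifiers. A rough sketch of the derivation the RFC describes (MD5 over a history value concatenated with the MAC-derived EUI-64; the EUI-64 below is a made-up example, not a real MAC):

```python
import hashlib
import os

def temp_interface_id(history: bytes, eui64: bytes) -> tuple:
    """Derive a temporary IPv6 interface identifier (RFC 3041, sec. 3.2.1).

    MD5(history || eui64): the leftmost 64 bits become the identifier
    (with the universal/local "u" bit cleared to mark it local); the
    rightmost 64 bits become the history value for the next round.
    """
    digest = hashlib.md5(history + eui64).digest()
    ident = bytearray(digest[:8])
    ident[0] &= ~0x02              # clear the "u" (universal/local) bit
    return bytes(ident), digest[8:]

history = os.urandom(8)                        # initial history value is random
eui64 = bytes.fromhex("0216cbfffe8d4c2a")      # made-up EUI-64 for illustration
ident, history = temp_interface_id(history, eui64)
assert ident[0] & 0x02 == 0                    # the MAC never appears as-is
```

Because the identifier is regenerated periodically from a rolling history value, an observer cannot correlate a machine's addresses over time back to its hardware.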

Machine ID

Posted Jul 5, 2006 4:35 UTC (Wed) by xoddam (guest, #2322) [Link]

It's good to be thinking about this issue, but the likely scenario is
that the same draconian government that might be interested in tracing
communications is also going to be the first purchaser of the hardware
and have complete control over the software installed on each device, as
well as over all intermediate networks.

If the Chinese government (as a purely hypothetical example) wants to be
able to monitor and trace the communications of machines it buys for
children in its schools, there will be little or nothing the project
designers can do to prevent it.

Let them eat Cake

Posted Jul 5, 2006 3:14 UTC (Wed) by botsie (guest, #1485) [Link] (5 responses)

Having lived all my life in third world countries, I can't help but think that the OLPC project is doing a Marie Antoinette, saying "let them eat cake" to kids asking for bread.

Frankly, I see this as a diversion of energy and funds from more fundamental issues: potable water, disease, child labour and more.

On the other hand, the urbanised middle class of the third world probably would benefit from this -- however in that case a less extreme environment can be assumed: things like regular power supplies for example.

IMHO, a better understanding of ground realities in the third world is needed by the OLPC team.

Why make them queue for bread?

Posted Jul 5, 2006 4:14 UTC (Wed) by xoddam (guest, #2322) [Link]

'The poor you will always have with you'. Crisis relief is one thing;
development is another.

When kids who could be in school are on the streets begging at the
windows of taxis because it earns money for sweets, how do you persuade
them to invest in their own futures? Hint: it's not a bread queue.

OLPC laptops will not compete with bread, drinking water and healthcare
for funds, but with textbooks in established school systems. Nor are
they intended for people who are presently starving. This is not
charity, this is investment. The way to reduce poverty in general is to
invest in the poor and in their neighbourhoods. Education is investment.
The OLPC project is an *enabling* one, not a handout.

Child labour is a fractious issue indeed. Many poor families are
dependent on the income of their working children, some of which (such as
carpet-weaving on domestic looms or agricultural work alongside their
parents) is traditional and reinforces family life. Not all child labour
is in garbage tips and airless sweatshops. The way to lift these
families from poverty and the children from the need to work is not to
deprive kids of their incomes but to police their working relationships
and conditions so that they are neither chained to nor crippled by their
paid work, and most importantly to show them that there are better
futures.

Let them eat Cake

Posted Jul 5, 2006 7:14 UTC (Wed) by error27 (subscriber, #8346) [Link]

Obviously that's the first question that people raise... All the ideas you mentioned are important. But just because somebody contributes to OLPC doesn't mean they couldn't contribute to those as well.

The interesting thing about OLPC is that it's both a compassionate thing to do and also a fun project. OLPC is a very ambitious project and it's experimental and new. This means it's more likely to fail, but I still think it's worth the risk.

As far as power goes, my experience is that there are a lot of places with food and water but no electricity. Even in towns the electricity is flaky and tends to destroy computers. In a school setting, the desks do not have power connected.

Let them eat Cake

Posted Jul 5, 2006 17:12 UTC (Wed) by yodermk (subscriber, #3803) [Link]

Well, the third world is a big and diverse place. Certainly there are places where the necessities must take absolute priority. And in some places, most people have what they need, just not a lot for "extras." I suspect it's that group that the project targets.

I currently live in Ecuador, which could easily be a first world country if it weren't for government mismanagement. There are amazing natural resources here. There are some real needs among some parts of the population, and the organization I work with does, among many other things, help rural communities with potable drinking water. (Amusingly, I was at a dedication of such a project once, where the community threw a big feast for lunch, with what to drink? .... Pepsi!) There are sad cases here where parents force their small children to sell in the street, and beat them if they don't sell enough. And families that have to live near the garbage dump (they were recently required to move their homes out of it).

How this project could help the least fortunate is hard to see. But most in Ecuador have the basics, and if done correctly, I could see it helping.

Another big question might be -- are computers really beneficial to schoolkids, at least before high school? Will it make it easy or tempting to not learn such things as long division, or even decent handwriting?

Maybe the problem is you.

Posted Jul 6, 2006 21:27 UTC (Thu) by smoogen (subscriber, #97) [Link]

I think that most of the people on the OLPC team have a pretty good idea about the so called third world.

The first idea is that these places are NOT all like some commercials, with everyone living in garbage shipped from the USA, or being ravaged by dysentery or the hooligans next door. Yes, there are parts of the world where war, clean water, disease, etc. are the primary concerns for people to survive. The OLPC would not be a good candidate for them. It is a good candidate for relatively stable places where people are trying to make the lives of their children better but couldn't afford the USD 200 price tag for the lowest-end computers.

Let them eat Cake

Posted Jul 17, 2006 16:20 UTC (Mon) by forthy (guest, #1525) [Link]

This seems to be a common picture of the developing world: starving kids with flies around them. Yes, these starving kids with flies around them do exist, and no, they don't make up the majority of children in the developing world. And the main problems they have are caused by wars. So this is not the place where these OLPC laptops will go to.

The majority of the developing world has sort-of-clean water (you need to cook it before you drink it), just enough to eat (usually more healthy than McDonalds - too much to eat is worse than just enough to eat ;-), and so on. What's lacking there is education. And that's where the OLPC program tries to improve the situation.

Package Manager

Posted Jul 5, 2006 11:04 UTC (Wed) by warmcat1 (guest, #31975) [Link] (10 responses)

I sure hope they end up with RPM and yum Fedora style. Because even the Maemo/770 "single image" upgrade stuff has a package manager in it to allow third-party apps to integrate sensibly.

Once there have been a few versions of the core packages out there for a while, it won't be a given that a binary package will work on a particular platform; it's important to manage that with well-tested and capable packaging techniques. Yum makes a lot of that go away before it even causes trouble, by reaching out for a current version of whatever is being installed in the first place, rather than leaving the user to get ahold of a single random RPM from a website somewhere, run around trying to find deps, and hopelessly run up against the fact that the binary RPM was built against core libs that are too old to be on his box.

I guess the concern is the overhead of running the RPM database. It is quite possible to make an "RPM lite" which does not maintain a database, but instead stores installed package *headers* (only) under /var/lib/rpm. RPM can be made to walk the headers in place of having a database. I implemented this technique with busybox RPM for a product here, and although it doesn't scale well to hundreds of packages, it works great even on a slow ARM for dozens.
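The headers-only scheme described above can be sketched concretely: instead of a database, each installed package leaves a small header file in a directory, and queries simply walk that directory. A toy illustration (the JSON layout and field names here are invented for clarity; real RPM headers are a binary format):

```python
import json
import tempfile
from pathlib import Path

def record_install(dbdir: Path, name: str, version: str, requires: list) -> None:
    """Store just the package header on install: no database to maintain."""
    dbdir.mkdir(parents=True, exist_ok=True)
    header = {"name": name, "version": version, "requires": requires}
    (dbdir / f"{name}.hdr").write_text(json.dumps(header))

def query_all(dbdir: Path) -> list:
    """Answer 'what is installed?' by walking headers: O(n), fine for dozens."""
    return [json.loads(p.read_text()) for p in sorted(dbdir.glob("*.hdr"))]

if __name__ == "__main__":
    db = Path(tempfile.mkdtemp()) / "var-lib-rpm"
    record_install(db, "busybox", "1.01", [])
    record_install(db, "dropbear", "0.47", ["busybox"])
    print([h["name"] for h in query_all(db)])  # ['busybox', 'dropbear']
```

The trade-off is exactly as the comment says: every query is a directory walk rather than an indexed lookup, which is fine for dozens of packages but would hurt with hundreds.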

Package Manager

Posted Jul 5, 2006 15:25 UTC (Wed) by brugolsky (subscriber, #28) [Link] (3 responses)

IMHO, RPM (and dpkg) have outlived their usefulness. Max Spevack, chairman of the Fedora Board, said in this interview that he is interested in shifting the Fedora Project from RPM to a distributed version control system -- i.e., Conary, or something like it. Conary offers all of the benefits of distributed version control, including simple incremental updates via compressed deltas, separation and unification of policy (pre/post-install scripts and triggers) from the software artefacts themselves, as well as a packaging model that utilizes inheritance to relentlessly factor the software build process. It will be a lot of work to convert RPM spec files to Conary recipes, but worth the effort.

Package Manager

Posted Jul 5, 2006 20:24 UTC (Wed) by warmcat1 (guest, #31975) [Link]

Thanks for the information... there's a link in the interview to this overview of Conary:

http://www.linux.com/article.pl?sid=06/03/30/210216

I don't know that the existing packaging technologies have generally "outlived their usefulness", because for my little set of usage cases (including embedded) RPM is really nice. On a RHEL sort of view I can see that some of the Conary things would be interesting, and in turn that makes it interesting for Fedora, so maybe it will come to pass.

For the OLPC, if the sticking point with having a packaging system at all is the storage footprint, it seems from the descriptions of Conary that that problem is getting bigger, not smaller, in that direction. So it continues to seem to me that RPM or some RPM-lite would be a good thing there.

Package Manager

Posted Jul 5, 2006 20:54 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link] (1 responses)

Looks like Max Spevack did not say that.

https://www.redhat.com/archives/fedora-marketing-list/200...

Package Manager

Posted Jul 5, 2006 22:45 UTC (Wed) by brugolsky (subscriber, #28) [Link]

Well that's unfortunate; I had smiled broadly when I read the Newsforge story. Thanks for setting the facts straight. I wish I had the time to go through Core/Extras/RPMforge, etc., and figure out how much effort is really required to switch packaging systems.

Please, not YUM

Posted Jul 8, 2006 11:31 UTC (Sat) by fergal (guest, #602) [Link] (5 responses)

I got so fed up with YUM's slowness and a couple of outstanding bugs that I tried Ubuntu, and I'll never go back. apt-get beats the crap out of YUM in every way.

I've yet to form a strong opinion on .deb versus .rpm.

Please, Yum Yum!

Posted Jul 9, 2006 7:43 UTC (Sun) by warmcat1 (guest, #31975) [Link] (4 responses)

Well we all have to use the opinions we arrived at from our experiences. However I have used apt-get on Debian, and used the ported RPM-aware apt-get, and in my experience yum is superior. Apt-get was and is a great boost out of package hell, wandering around in the desert looking for deps by hand, but yum does that job and more. For example, try "yum search" and "yum whatprovides" for an entire metarepo view of what rpm can only provide on installed packages.

On the slowness: yum currently wanders around randomly between its list of mirrors, guaranteeing suboptimal performance for everyone. Recent yum has an SQLite backend and is not in itself slow, but the default mirror behaviours make it seem slow, simply because it randomly tries to grab packages from some overloaded server halfway around the world. You can edit /etc/yum.repos.d/* to get yum to use a single local mirror instead of the list, and things are very fast on a decent box.
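Concretely, pinning yum to one nearby mirror is a one-file change: comment out the mirror list and set a fixed baseurl in the repo definition. A sketch of such a file (the URLs are placeholders, not real mirrors):

```ini
# /etc/yum.repos.d/fedora-core.repo
[core]
name=Fedora Core 5 - $basearch
# mirrorlist=http://mirrors.example.org/fedora-core-5   # disabled: random mirror picks
baseurl=http://mirror.example.edu/fedora/core/5/$basearch/os/
enabled=1
gpgcheck=1
```

With a single baseurl, every fetch goes to the one server you chose, so performance is as predictable as that server's load.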

"Your mileage did vary" ;-) but there is my experience.

Please, Yum Yum!

Posted Jul 9, 2006 17:22 UTC (Sun) by fergal (guest, #602) [Link] (3 responses)

apt has all that meta-repo goodness too: search, provides, etc. (apt-cache is the tool for that, not apt-get, which might be why you thought it didn't).

As for speed, I did modify my repos, and I hit the same mirror site for both Fedora and Ubuntu. The major difference is that yum hits the repo more often than apt. It also spends an inordinate amount of time loading the repo data (possibly fixed by SQLite).

When I install something with apt-get, all it does is download that package and its deps; it doesn't do all the other stuff that yum seems to, and most importantly it doesn't barf just because one repository is temporarily down! The number of times I've had to comment and uncomment Dag Wieers' repo is just silly.

There's also the ridiculous behaviour on ctrl-c (switch to the next repository, not quit; no, that requires ctrl-\ or fishing around for a PID to kill). This is a feature, say the developers.

Yum is probably on its way to being as good as apt-*, but I just don't see the point in suffering while it gets there, and by the time it arrives, apt will have moved a bit further.

I'd be interested to hear a specific example where you think yum is superior.

Please, Yum Yum!

Posted Jul 9, 2006 17:40 UTC (Sun) by warmcat1 (guest, #31975) [Link] (2 responses)

> apt-cache is the tool for that, not apt-get which
> might be why you thought it didn't

Yes I never saw these capabilities anywhere before yum.

> yum hits the repo more often than apt

The extent of my knowledge about how it works is that it uses HTTP byte ranges to get RPM package headers that contain everything about the RPM except the cpio payload. If it identifies that a package needs to be downloaded, for example, it will go download the header part of that package and then look in there for dependencies. That looks casually like a lot of back and forth, but it allows cool features like arbitrary mixing and matching with localinstall (install this random RPM I already have, getting deps remotely from anywhere you can) local RPMs, multiple repos and so on. I got the impression apt worked on a more precooked-database-file-from-the-repo way, but I don't actually know it.
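The byte-range fetch described above is ordinary HTTP: ask the server for just the leading bytes of the package, where the header lives, instead of downloading the whole file. A self-contained sketch with a tiny stdlib server that honours Range requests (the payload is fake; real mirrors serve real .rpm files, and the header is not a fixed 16 bytes):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAYLOAD = b"HEADERHEADERHDR!" + b"\x00" * 1024  # pretend .rpm: header, then payload

class RangeHandler(BaseHTTPRequestHandler):
    """Minimal server honouring 'Range: bytes=a-b', as package mirrors do."""
    def do_GET(self):
        start, end = map(int, self.headers["Range"].removeprefix("bytes=").split("-"))
        body = PAYLOAD[start:end + 1]
        self.send_response(206)                  # 206 Partial Content
        self.send_header("Content-Range", f"bytes {start}-{end}/{len(PAYLOAD)}")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                # keep the demo quiet
        pass

def fetch_header(url: str, nbytes: int) -> bytes:
    """Download only the first nbytes of a package: the header, not the payload."""
    req = urllib.request.Request(url, headers={"Range": f"bytes=0-{nbytes - 1}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    srv = HTTPServer(("127.0.0.1", 0), RangeHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    hdr = fetch_header(f"http://127.0.0.1:{srv.server_port}/pkg.rpm", 16)
    srv.shutdown()
    print(hdr.decode())  # HEADERHEADERHDR!
```

This is why the back-and-forth works at all: each extra round trip moves a few kilobytes of header metadata rather than the megabytes of cpio payload.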

> ^C

Yes everyone finds that annoying I am sure

> I'd be interested to hear an specific example
> where you think yum is superior.

Yum does multiarch/multilib (eg, x86_64 mixing/duping i386 libs) in a good way, I believe apt is unable to do this. Although if my belief is wrong, since it came from a discussion on Fedora-list wrt the RPM port of apt, please do disabuse me of it.

Please, Yum Yum!

Posted Jul 9, 2006 17:53 UTC (Sun) by fergal (guest, #602) [Link] (1 responses)

> allows cool features like arbitrary mixing and matching with localinstall
> (install this random RPM I already have, getting deps remotely from
> anywhere you can) local RPMs, multiple repos

apt does this just fine.

> I got the impression apt worked on a more
> precooked-database-file-from-the-repo way, but I don't actually know it.

I'm not so sure; the effect is the same anyway: all the useful repo data is available locally. Adding a new package does not cause apt to refetch the entire database for that repo. I'm not so sure about yum.

> Yum does multiarch/multilib (eg, x86_64 mixing/duping i386 libs) in a
> good way, I believe apt is unable to do this. Although if my belief is
> wrong, since it came from a discussion on Fedora-list wrt the RPM port of
> apt, please do disabuse me of it.

I don't know for sure but neither yum nor apt should know anything about 32 vs 64 bit. There might be some support in rpm or dpkg for a difference but it seems more likely that this is a debian vs fedora filesystem layout difference.

Please, Yum Yum!

Posted Jul 9, 2006 18:08 UTC (Sun) by warmcat1 (guest, #31975) [Link]

> apt does this just fine.

Oh well, more props to apt then.

> Adding a new package does not cause apt to
> refetch the entire database for tha repo.
> I'm not so sure about yum.

I believe the database part of the repo is lighter in the Yum way of doing things, since it goes to the actual packages to get the full metadata.

> There might be some support in rpm or dpkg for a
> difference but it seems more likely that this is
> a debian vs fedora filesystem layout difference.

Yes, the RPM libs know about multiarch and deal with it, but yum has to be aware of what is going on. For example:

# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{ARCH}\n" | grep glibc
glibc-2.3.4-i686
glibc-common-2.3.4-x86_64
glibc-2.3.4-x86_64
glibc-kernheaders-2.4-x86_64
glibc-devel-2.3.4-x86_64
glibc-headers-2.3.4-x86_64

There are two packages called "glibc" of the same version installed in the one box, for example... yum or whatever has to be aware that only the one matching the arch of a new package is a resolution for it, that the -devel package only matches the package with the same arch, etc. Put another way, you can have the x86_64 set of libs needed by a package, but if that package is coming in as i686, it is NOT a resolution. Thinking about it, the complaints I heard were probably specific to the RPM port of apt, and may not say anything about apt's native multiarch powers.
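The resolution rule in that example can be stated compactly: an installed package satisfies a dependency only when its architecture is compatible with the requester's. A toy sketch of the check (the compatibility table is heavily simplified; real depsolvers also handle noarch packages, exact-arch requirements, and more):

```python
# Which installed architectures may satisfy a dependency coming from a
# package of a given architecture (heavily simplified for illustration).
COMPAT = {
    "x86_64": {"x86_64", "noarch"},
    "i686": {"i686", "i386", "noarch"},
}

def resolutions(installed: list, requester_arch: str) -> list:
    """Filter installed packages down to valid providers for the requester."""
    allowed = COMPAT[requester_arch]
    return [p for p in installed if p["arch"] in allowed]

installed = [
    {"name": "glibc", "version": "2.3.4", "arch": "i686"},
    {"name": "glibc", "version": "2.3.4", "arch": "x86_64"},
]
# An incoming i686 package must NOT be satisfied by the x86_64 glibc:
print([p["arch"] for p in resolutions(installed, "i686")])    # ['i686']
print([p["arch"] for p in resolutions(installed, "x86_64")])  # ['x86_64']
```

Name and version alone are not enough to identify a provider on a multilib box; the arch is part of the key, which is exactly what trips up tools that were not written with multilib in mind.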

Thanks for asking about security

Posted Jul 7, 2006 2:09 UTC (Fri) by kingdon (guest, #4526) [Link] (1 responses)

When they talk about mesh networking and zeroconf and some of the other "networked out of the box" programs and the limited access to the internet to get software updates, I can't help but get nervous about security.

I expect OLPC malware would spread peer to peer, a la the pre-Internet Windows viruses.

Hopefully he is right, and malware just won't be a big problem (principles like "easy to fix" might also help). I'm glad this interview at least covers the topic, and the fact that they have at least thought of things like SELinux looks promising.

Thanks for asking about security

Posted Jul 8, 2006 11:34 UTC (Sat) by fergal (guest, #602) [Link]

How's that different to the spread of malware on Windows currently? I don't see how mesh networking versus big-ISP networking makes this any worse.

It's likely that most of the boxes will be running firewalled and with little or no services turned on.

Learn from the Apple II

Posted Jul 7, 2006 2:52 UTC (Fri) by nas (subscriber, #17) [Link] (1 responses)

I recently listened to an interview with Steve Wozniak (very interesting, BTW). Afterwards, I found an online copy of the original Apple II manual, also known as the Red Book. As a child, I never had the pleasure of owning an Apple II. In retrospect, I wish I had. The manual is filled with all kinds of interesting details about the hardware. Also, the firmware of the machine looks like it would make low-level tinkering easy; for example, it has a built-in disassembler. The OLPC project would do well to study the Apple II.

Learn from the Apple II

Posted Jul 9, 2006 7:22 UTC (Sun) by sbergman27 (guest, #10767) [Link]

Indeed. I did have an Apple II+ in high school. Does your book have the hardware logic schematic? It was included in one of the manuals.

To beep the speaker it was a "peek -16252" or some such. Yep, built-in disassembler. 280 x 192 resolution, 8-color graphics. (Sort of.) I splurged and got the full 48k of RAM. Later, I could upgrade it to 64k with an add-on card, but the last 2k had to be bank-switched. Another add-on card gave it an 80-column console screen. Never had a hard drive, though. Just dual 5.25 single-sided floppies. Originally they were good for 112.5k, but a later version of Apple DOS increased it to something a little higher.

Five years later I got an IBM XT compatible. I was stunned at how much DOS insulated me from the hardware. It was a completely different experience. I eventually sold the Apple to a friend, long after it had become obsolete. (386s were out by then.) I thought it had outlived its usefulness.

He got a second phone line, put in an rs232 card and external modem and ran a nice, thriving little BBS on it for years.

Now, I kinda wish I'd kept it.


Copyright © 2006, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds