Sunday, December 30, 2007
With all the software development practices being flung around these days, I'm curious whether any thought has been given to how to scale up software development.
I've often heard it said that small teams are better than large teams, but what happens when you have a large application, or perhaps a large suite of applications that are all supposed to work together? How do you scale up development in such a way that you don't end up wasting programming resources?
Fred Brooks says that a large programming team won't work as well as a small programming team. The thing is, in some sense, every day programmers work on huge projects even if all they do is write a small, 100 line Python script. That script uses Python (a large project), which relies on an operating system (a large project), made up of large components, etc... We are always leveraging someone else's code on someone else's project.
Writing library or framework code is much harder than writing application code. The trick is: how much more effort does it take? ..especially if the library is only being used in house? How big should a development team get before the project is split into two groups, or three? If we say that teams should be of about 5 people, how many teams should we have before we create a team responsible for tracking down and eliminating duplicate code - by turning it into an inter-team library?
I haven't seen anything that attempts to tackle this.
There's another question.
Java's API and Python's API both have really nice documentation. I would expect that any teams that try to work together would need at least this amount of documentation. Do they need anything else? I mean, Python's and Java's API documentation are all I use.
If joint design is being done, that is, an interface is being negotiated, what is the best way of doing this? Sure, if you only have 2 teams negotiating a handful of interfaces this probably isn't a problem, but what if you have 3? 4? 5? 10? 50? If a project is made up of many teams, is it a good idea to have a meta-team responsible for maintaining the large project's design integrity?
I don't know. I haven't seen anything written about this. It's important, though. Many software projects are huge, and right now most best practices are optimized for smaller teams and small software projects. Imagine trying to do native agile programming with 50 teams of 5. Your project would be a mess.
..now what?
Friday, December 21, 2007
Sleeeppppp zzzz
I've been reading a book about people who started up companies. These guys are crazy. The hours they work are insane.
There's a story about someone who would sleep for 4 hours every 2 days. Another guy would work for 4 days solid until he just fell asleep. That's pretty gosh darn bonkers.
I am unusual in that I appear to need a couple more hours of sleep than the average person. I usually like between 9 and 10 hours of sleep. Usually it's 10 hours, and it can be more if I'm learning something (taking a course or going to a conference, for example).
The "average" amount of sleep per night is 8 hours. There are some people who like more some people that like less. Apparently the typical range is between 6 and 10 hours.
I'm not sure how that works; I start to go a little crazy if I have only 8 hours of sleep.
I've heard that attention deficit disorder might have a sleep deprivation angle to it. Given that I doubt many people are getting even 8 hours of sleep, many people, if not just about everyone, may be walking around having slept too little. That's not good.
Humm.. better not become a member of that group. I'm going to sleep now.
Thursday, December 20, 2007
Programmer's theater group
I belong to an amateur theater group. Every year we put on two large shows. These shows take months of preparation. Not only do the actors need to learn their lines and go on stage, but there's a huge amount of behind-the-scenes work to do as well.
There are things like:
- Choosing the script
- Directing
- Renting the hall for the play
- Renting the space for the rehearsals
- Ticket selling
- Costumes
- Designing and building the set
- Props
- The staff running front-of-house during the productions
- Advertising
All in all, even with a cast of about 15 people, we end up using more than 30. All of this is done on a volunteer basis, all in people's spare time.
It takes a lot of time. As an actor, I spend 5 hours a week for the first two months at group rehearsals, then 9 hours a week for the last month, culminating in about 20 hours during the week of the production. Generally, people also spend hours on their own learning their lines.
The more I think about it, the more I'm amazed at how all this works. I'm curious if anyone is trying to do this sort of thing with a software project.
The idea would be to get about 5 or so programmers who are interested in doing a project and aren't particularly picky about what the project is, then brainstorm ideas until one stands out, then code it, setting people up as project leads, designers, etc... as well as grabbing others for things like building the website, registering the finished project with websites and sending out press releases.
With 5 people spending about 5 hours a week or more on the project, I don't see why we couldn't have something interesting in 6 months or so.
The idea would be to set it up as an agile-style process: meet once a week for a Scrum-type status meeting, have a "director" or lead to decide the high-level direction and focus of the project... and guarantee some hours of availability for code pairing.
The goal would be to set some fixed time span (this is important) and a goal, and try to ship a workable solution, preferably as an open-source project. The timespan would probably be about 5 to 6 months in total, with about 4 months of coding/design time.. the other 2 months would be spent deciding which problem to tackle, by looking at problems, possible programs and possible new features that can be written in such a short time. The idea here being that if you're going to spend the next few months tackling a problem, you might as well think hard about which problem to tackle.
Given what I've seen with the theater group, some hours of inter-team interaction would be needed for social reasons and as a motivator. This is why I would say it's important for team members to set aside some time at which everyone is working on the codebase at the same time. If everyone has a laptop, it could even all be at the same location. It's always great coding in an environment where you can bounce ideas off each other... not to mention things like peer review.
The project would proceed (Look Phil! Two "e"s!) in phases:
- Meet once a week for a while, and each week present a possible project as a problem or need to fulfill, plus a goal for the project. After some time, recap all the projects and vote on which one is the best. The project must have a team lead/"director" and must have enough programmers who want to work on it. New features to existing projects are allowed.
- Build a high-level design and mock-up for how the program should be built. This is mostly up to the director to organize. They should pick someone to help design the project. During this phase, any part of the project that may not be feasible should be investigated up until the point where everyone is convinced it will work. Brainstorming sessions should be held twice a week at the invitation of the project lead.
- The director chooses who should work on which section of the problem and works with those people on explaining what the behavior should be. Teams are made. High-level design of these modules is done and coding is started. Two sessions per week, ~2.5 hours a week.
- After two months or so, we start to join all the pieces together and run the project, looking for bugs and undesirable behaviors. The schedule adds a 4-hour session to the existing 2.5-hour sessions.
- During the last week, the schedule accelerates to bug fixing every day of the week and work is wrapped up. Over the weekend the program is compiled and uploaded, or if the project is a website it goes live and links are submitted to search engines, etc..
So I'm currently curious to try this out. I've got two potential prospects and we've been brainstorming potential ideas. I'm curious to know if we can get something out of it.
Anyhow.. food for thought. You want to audition for the next project? You know how to reach me. :-)
Wednesday, December 19, 2007
The dangers of reducing coupling.
This is a continuation of part 1
Once programmers discover the joys of sectioning off code and reducing coupling, there's a tendency to go a bit too far. (This is all from a Java programmer's perspective.)
One of the common anti-patterns people fall into is trying to manage their dependencies by tracking which classes/packages know about which other classes/packages. The idea is that a class should only know about certain other classes. As a result, it should be possible to group related classes into a package and then use a tool to automatically generate a map of dependencies between packages. This map can then be used to figure out where classes are acquiring dependencies, and to destroy the stupid ones.
There are a few problems with this:
1) It gives a false sense of security, because two classes can depend on one another without having a compile time dependency. The most common problem I've seen is some sort of complex event-driven system that ends up getting tied in knots, because the code was written using this event-driven system partly as an attempt to remove compile time dependencies, in the hope of gaining the benefits that avoiding compile time dependencies is supposed to bring.
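To make that concrete, here's a minimal sketch (made-up classes, not anyone's real code). PriceFeed and PriceDisplay never import each other, so a package dependency map shows nothing between them, yet PriceDisplay breaks the instant PriceFeed renames its topic or changes the payload type:

    import java.util.*;

    // A trivial string-keyed event bus.
    interface Handler { void handle(Object payload); }

    class EventBus {
        private final Map<String, List<Handler>> subscribers =
                new HashMap<String, List<Handler>>();

        void subscribe(String topic, Handler h) {
            List<Handler> list = subscribers.get(topic);
            if (list == null) {
                list = new ArrayList<Handler>();
                subscribers.put(topic, list);
            }
            list.add(h);
        }

        void publish(String topic, Object payload) {
            List<Handler> list = subscribers.get(topic);
            if (list == null) return;
            for (Handler h : list) h.handle(payload);
        }
    }

    class PriceFeed {                    // no import of PriceDisplay...
        private final EventBus bus;
        PriceFeed(EventBus bus) { this.bus = bus; }
        void tick() { bus.publish("price", Double.valueOf(42.0)); }
    }

    class PriceDisplay {                 // ...and no import of PriceFeed,
        PriceDisplay(EventBus bus) {     // but coupled to it all the same
            bus.subscribe("price", new Handler() {
                public void handle(Object payload) {
                    System.out.println("Price: " + (Double) payload);
                }
            });
        }
    }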
2) Compile time dependencies are not run time dependencies. Even if you consciously try to avoid falling into the trap of treating your compile time dependencies as run time dependencies, you will still fail, because there's often no way of expressing the run time behavior of a system with a static, compile time dependency map. In the worst case, the attempt to do so will put limits on what sorts of patterns you can use while coding, in order to keep the static dependency graph matching the actual run time graph.
Essentially these two reasons boil down to 1) it won't work in practice and 2) even if you could make it work in practice it still wouldn't be a good idea.
Keeping your inter-package dependencies clear and clean is absolutely a good idea. It is not, however, a panacea. It can't solve world hunger and it won't bring anyone back from the dead.
The thing with managing compile time dependencies is that it only works as far as the compiler's understanding of how your program is put together.
What you should be trying to do is manage the coupling of your code. A dependency is a hint that there's some sort of coupling. The coupling could be high or low, but the hint is there. If the compiler doesn't show a link, that doesn't mean there's no coupling; it just means there's no compile time dependency. A compile time dependency can be thought of as one form of coupling.
Ok, so let's say we're a developer and we've seen the light, and now we know that coupling is bad. Are there any other ways to screw this up? Yep. Trying to reduce coupling to zero.
Reducing coupling between components to 0 can't work. I've seen people try to do this by removing as many constraints as possible.. for example, removing compile time checks by passing everything through something like a Map, or events, or some such. Don't do this. There are languages out there that don't have compilers; ask programmers in those languages if they experience problems with things being too coupled.
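For instance (hypothetical code again), compare a typed call with its "decoupled" Map version. The second one compiles no matter what the caller does; the coupling is still there, you've just traded a compile error for a runtime explosion:

    import java.util.*;

    class Resizer {
        // Typed version: misspell a parameter and the compiler complains.
        void resize(int width, int height) { /* ... */ }

        // Map version: a caller that writes "widht" compiles fine and
        // blows up here, at run time, with a NullPointerException.
        void resize(Map<String, Object> args) {
            int width = ((Integer) args.get("width")).intValue();
            int height = ((Integer) args.get("height")).intValue();
            /* ... */
        }
    }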
Another favorite is to keep chopping up code way past the point of sanity. The way to replicate this at home is to take a reasonably well-written program and try to make every line of it into a framework. After a few hours you'll wind up with a dense soup of spaghetti.
This happens because you can't remove coupling. If you try to remove coupling by splitting things into incredibly tiny pieces, you end up with a program with more tightly coupled components, more coupling-related issues and more complexity than ever.
Before I continue I'd like to introduce the idea of cohesion. Cohesion represents the idea that some things in life are inherently coupled together tightly. A real-world example of this is a table leg. Every molecule in that table leg can be viewed as a separate entity. However, it makes sense to manipulate the table leg as a whole, so we don't consider the fact that it's made up of molecules which are made up of atoms. Sure, it's a lie, but it's a convenient lie that makes the world easier to understand.
Good programmers recognize a cohesive object when they see one.
Using this approach of building sections of code with high internal cohesion and low external coupling, you can build some fairly amazing things.
It works in ANY language.
Oh, and you can even use the trick recursively: build an object out of some cohesive parts, then use a collection of loosely coupled objects to build a meta-object, and so on. As such you can build mind-bendingly complex things.
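Sticking with the table analogy, a made-up sketch of the recursion: each level exposes one small, cohesive interface and hides how tightly coupled its insides are.

    // Each level hides the coupling inside it behind one small interface.
    class Leg {
        private final double heightCm;
        Leg(double heightCm) { this.heightCm = heightCm; }
        double height() { return heightCm; }  // nobody asks a leg about molecules
    }

    class Table {
        private final Leg[] legs;             // internally coupled parts...
        private final double topThicknessCm;
        Table(Leg[] legs, double topThicknessCm) {
            this.legs = legs;
            this.topThicknessCm = topThicknessCm;
        }
        double surfaceHeight() {              // ...one number to the outside
            return legs[0].height() + topThicknessCm;
        }
    }

    class DiningRoom {                        // the meta-object treats a
        private final Table table;            // table as a single unit
        DiningRoom(Table table) { this.table = table; }
        boolean chairSlidesUnder(double chairArmHeightCm) {
            return chairArmHeightCm < table.surfaceHeight();
        }
    }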
Every time you use Swing or SOAP or RSS to write an application, you're building a sort of meta-object that includes libraries with high internal cohesion: TCP/IP, XML, Swing objects like JTable (shudder)..
Ok, so how do we do this? Well, the easy way is to do test driven development. I'm not sure why, but test driven development seems to help programmers write programs whose objects have higher internal cohesion. I have my theories:
1) It actually forces programmers to think first - to design*.
2) Writing unit tests is easier if you have objects with simple interfaces that aren't dependent on a myriad of things (see the sketch after this list).
3) Writing objects with simple interfaces also means you want to clump related functionality into one object so you don't drive yourself insane writing tests for millions of little, tiny objects.
* Design is the "D" word. Don't say it at any agile software conferences or you'll spend the next half an hour explaining yourself.
I suspect that, taken together, these things are responsible for the worst hyperbole I heard while at the SD2007 Best Practices conference. I quote (more or less):
"Test driven development is the silver bullet Fred Brooks say didn't exist."
sigh...
Low coupling, high cohesion. It's the mantra of good software.
Further reading:
http://en.wikipedia.org/wiki/Test-driven_development
http://en.wikipedia.org/wiki/Coupling_%28computer_science%29
http://en.wikipedia.org/wiki/Fred_Brooks
Tuesday, December 18, 2007
Why coupling is bad.
There's a large difference between a 1,000 line program and a 100,000 line program. Most of it has to do with making an application's architecture scalable.
Uh-oh, someone asked me to introduce a new feature into my established codebase!
When programming, there are two sorts of new features: the kind I like to call a "vertical" new feature, and the "horizontal" new feature.
A vertical feature is one that doesn't really interact much with other code. If you picture the code for the feature as a stack of layers that only interact with one another, you end up with a vertical stack. When this stack is added to the established codebase, it doesn't make the codebase much more complicated. I mean, there's the complexity in the feature itself, but that complexity is nicely contained in its own stack. It's almost as if it's another program that just happens to be compiled with the established codebase. If you were to make a change somewhere in the existing codebase, you wouldn't need to worry about breaking the new feature, because there's practically no chance that what you're doing will affect it. Vertical features are self-contained and are therefore lightly coupled with the established code.
Horizontal features are those that affect a large cross-section of the application. The best example of a horizontal feature in InteleViewer would be key images.
InteleViewer is an application that shows medical images like CTs, MRIs, X-rays, etc.. The thing is, key images are images that aren't really images; they are references to images. From the Viewer's perspective, it gets a command that says "download image X". The Viewer goes "OK", downloads it, and is then surprised to find out that image X is actually a reference to three other images Y, Z and Q. The Viewer then has to go out and transfer those too.
Now, this all sounds perfectly simple. In your head you can imagine that all you'd need to do is have the KeyImage code transparently fetch the images it refers to. Well, yeah.. except for the fact that KeyImages came relatively late in the history of the Viewer, and the Viewer was making lots of assumptions about the nature of an image to implement other existing features. Here are a few issues:
- The KeyImage might refer to an image that's already loaded, and these things are huge, so we don't want to load them twice. We have to add code to make sure we're not loading the same thing twice in the loading code itself, instead of in the code calling the loader.
- We have multiple different protocols we can use to download images. Some of them have their own constraints as to what's possible vis-a-vis downloading images. We have to be aware of this and deal with each source individually, or try to build the key image code out of abstracted operations that already exist.
- If we had any protocols of our own that assumed we were only sending images, we need to rewrite them a bit.
- Since a key image is a file of its own but contains no image data, you can't blindly send the files themselves to image manipulation routines.
- We cache all images on disk, but key images don't have any image data, so we need to be aware of this in the cache code too.
- Key images have filtering operations that apply to the underlying images, so these filtering operations have to be combinable.
..the list goes on. Key images have introduced constraints all over the code, and the more constraints you have, the more likely the next feature you add will need to know about key images. Key images add constraints across the application's loading and caching systems, and therefore make any code in those systems more complex and subtle than before.
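To give a feel for the shape of the thing (a sketch with invented names, not our actual code): once a "reference" can hide behind the image interface, the loader has to resolve and deduplicate, jobs that used to belong to its callers.

    import java.util.*;

    interface ImageRef {
        List<ImageRef> resolve();          // what am I, really?
    }

    class PixelImage implements ImageRef {
        public List<ImageRef> resolve() {  // a real image is just itself
            return Collections.singletonList((ImageRef) this);
        }
        // ...pixel data, cacheable on disk...
    }

    class KeyImage implements ImageRef {
        private final List<ImageRef> referenced;
        KeyImage(List<ImageRef> referenced) { this.referenced = referenced; }
        public List<ImageRef> resolve() {  // no pixels of its own
            return referenced;
        }
    }

    class Loader {
        private final Set<ImageRef> loaded = new HashSet<ImageRef>();

        void load(ImageRef ref) {
            for (ImageRef real : ref.resolve()) {
                if (loaded.add(real)) {    // dedup now lives in here,
                    download(real);        // not in the calling code
                }
            }
        }

        private void download(ImageRef real) {
            // pick a protocol, fetch, decompress, write to the cache...
        }
    }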
Here's a question for you: can we abstract away the annoyances of key images by clever use of layers and factories and such?
Well, every programmer should try, and some of the things I mentioned can indeed be hidden by interfaces and abstractions. The simple fact that adding KeyImages was possible at all is because we'd worked hard at hiding the complexity of the loading system. The thing is, there's a fundamental limit to what you can do with abstractions.
Consider this: your abstractions are shaped by what it is possible to do with the thing(s) you're abstracting. It's fairly easy to come up with a case that makes abstraction impossible. Here's an example:
You want to create a program that downloads a movie file from an existing website and then plays it. The system must allow the movie to start playing as it is coming in. Unfortunately, one of the movie formats puts some important information at the end of the file. It's not possible to actually play the movie until that data has arrived, and the server you're talking to doesn't allow you to ask for specific bytes before others. Net result: you're doomed. No abstraction can save you, because it's not possible to provide an implementation that will do what you want.
This exact same thing can happen in more subtle ways with horizontal features. If a feature adds a requirement that contradicts an existing requirement, there's no abstraction you can apply that will fix it.
The first rule here is to make horizontal features into vertical features whenever humanly possible. If you don't, your application will get old before its time. I would go further and say "no" to horizontal features, or even look for little-used horizontal features to remove from your app. Functionality not being used? Every piece of functionality has some horizontal component, so if it's not used, it should be removed.
What we're doing here is reducing the feature's coupling with other components. Being paranoid about coupling is a very powerful idea. When programmers discover it they jump for joy, and then go on to make applications that are much larger and more complicated than ever before.
Then they run into the next wall.. More about that later.
Part 2 - When reducing coupling goes bad.
Monday, December 17, 2007
CSS etc..
So I've updated the look of the blog recently. I'm having trouble getting it to look decent.
Basically this whole redesign was prompted by the fact that the previous template didn't grow horizontally with the size of the web browser. On my large monitors with large text it looked completely silly; there was this skinny column of text centered right in the middle of the page. sigh..
Well, what I did was take a blogger template that actually did change its size depending on the horizontal width of the browser window, and tweak the heck out of it.
Here's the original:
Yeah.. I changed it a bit. I consider it an improvement based just on the fact that there's no freaking orange in it. :-) Orange makes me a sad panda. If I remember correctly, the tangerine iMac was the least popular, so it looks like I'm not alone.
One of the things I was unhappy with is that there's no way of dividing up the space between two widgets such that one widget takes up a fixed amount of space in pixels and another takes up whatever is left over. Not sure why that is, since I seem to remember doing something like that in the old do-everything-with-tables days.. although I might be confusing HTML table layout with GridBagLayout.
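Since I mentioned GridBagLayout, here's what "one fixed column, one greedy column" looks like in Swing (a quick sketch): weightx decides who absorbs the leftover width.

    import java.awt.*;
    import javax.swing.*;

    public class FixedPlusFlexible {
        public static void main(String[] args) {
            JPanel panel = new JPanel(new GridBagLayout());
            GridBagConstraints c = new GridBagConstraints();
            c.fill = GridBagConstraints.BOTH;
            c.weighty = 1.0;

            JPanel sidebar = new JPanel();
            sidebar.setPreferredSize(new Dimension(200, 0));
            c.weightx = 0.0;                 // fixed: never absorbs extra width
            panel.add(sidebar, c);

            c.weightx = 1.0;                 // flexible: takes whatever is left
            panel.add(new JScrollPane(new JTextArea()), c);

            JFrame frame = new JFrame("Fixed + flexible");
            frame.setContentPane(panel);
            frame.setSize(800, 600);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }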
When doing this site redesign, I ran into the links-that-don't-look-like-links problem. Essentially, the only way you can tell what is a link on a web page and what isn't is by either mousing over it and noticing your cursor changes, or by seeing that it's a different color or underlined. The thing is, the default blogger templates don't do this consistently. Links come in at least 3 different colors; some are underlined and some aren't. In ye olden days this wasn't a problem: sites couldn't override the link color. But now they can, and what's worse, CSS actually allows you to have a different link style for every piece of text and widget on the screen. It's often happened to me that I'll be mousing around a webpage and suddenly notice that a piece of text is actually a link. This happened to me on Coding Horror: if you mouse over a blog posting title, it's actually a LINK! It's not underlined and it's not the same color as any other links on the page.
So when I was doing my blog, I wanted all the links to be underlined and the same color.. but it looked like crap, so I thought ok, I'll at least have them all underlined.. I still haven't dug deep enough to figure out how to convert all the sidebar links to be underlined, but I'm getting there.
At any rate, I still feel fairly bad about having my blog post titles as links but having them white instead of blue.
I like blue links. Blue was the default color for all hyperlinks for years.. and red was the default for previously visited links.. argh.. that reminds me! I have to figure out why previously visited links don't change color.. they should be a shade of red but it's not working.. grumble grumble.... Ok, time to do that.
Tuesday, November 20, 2007
Song...
So, for about three weeks I've had this damn song stuck in my head. I heard it on the radio and remembered the words to the chorus. I figured in this day and age I'd be able to find it again using Google or one of those fancy searches we have these days. Nope.
Ok.. so now I'm thinking that I heard the words wrong.. as sometimes happens.. I'm going through every possible thing it could be and still no dice. Then, just as I'm about to leave for rehearsals, I hear it on the radio again. I quickly look it up on the radio station's recently-played list (thank the internet for that one) and bingo.
The track is called "BIG CHANGE". Presumably it's in all caps because you need to yell it. !!!BIG CHANGE!!!. Apparently it's coming. Who knew?
Sunday, November 18, 2007
"Goating" is evil
Goddam it. First thedailywtf and now Jeff. Nothing says "I am puerile" more than goating.
For the uninitiated, goating is pulling a prank on someone who has left their computer unattended without locking it. I used to see things like this when someone would forget to log off their station back when I was in university. If you did that in a public computer lab, anyone could walk up and use your terminal logged in as you.
This was actually becoming something of a problem since, at least initially, there weren't that many terminals available. If someone forgot to log off, the screen saver would quickly kick in, and then you couldn't log them off until the full timeout, which was fairly long. When the labs were really busy, we often asked an admin to kill the ghost session.
For a while there was much pranking, but it became clear that many people took this sort of thing extremely personally. So I, and a few others, started to log off people who left their terminals unattended before anyone else could get at them, occasionally sending them an email mentioning that they forgot to log out and that this was a bad idea, etc.. This played out well, because one time, I don't know what I was thinking, but I neglected to log out and was saved by one of the people I had helped earlier.
I am against pranking for multiple reasons.
The first is that it's a big waste of time. Some of these pranks are elaborate and take time to undo. That's wasted time. I dislike spending 45 seconds at a traffic light the same way I dislike people spending 5 minutes screwing with my settings and me spending another 5 tracking the changes down and fixing them.
The second is that it can alienate those with different cultures, backgrounds or mindsets. If you're doing this sort of thing within a close-knit group of friends, that's fine; they've all agreed to it. Dragging arbitrary people into it can hurt feelings and breed distrust and bad blood within the team.
The third is that it's absurdly unprofessional.
There's also this notion, which Jeff mentioned, that this sort of prank is pulled for someone's own good. I'd believe that if those doing the pranks didn't have so much fun doing them.
Many places don't need this sort of security policy anyway. It's just creating stress and conflict where it's not needed.
RE: http://www.codinghorror.com/blog/archives/000997.html
Sunday, November 11, 2007
Exponents, ram, 64-bits...
Ok, I really should be doing something else but I'm going to take a moment to give my two cents on this topic:
http://www.codinghorror.com/blog/archives/000994.html
In it, Jeff Atwood ponders the move to 64 bits. Apparently he's a bit surprised at the speed at which the 32-bit addressing limit is becoming a problem. Actually, I'm surprised it hasn't become a problem sooner.
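The arithmetic says the wall was always close by: a 32-bit pointer can address 2^32 bytes = 4,294,967,296 bytes = 4 GiB, and on 32-bit Windows each process normally only gets 2 GiB of that for itself (the kernel reserves the other half). A machine running a couple of memory-hungry apps gets there fast.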
I seem to remember, back in the old days when I first started computing, machines came with 4 megs of RAM. This was a Mac, BTW. Macs were used quite a lot for Photoshop, and Photoshop demanded stupid amounts of RAM.. I mean, you could have 32 or 64 megs of RAM and it still wouldn't be enough.. OMG!
The reason I mention this is that what an average PC user runs is very tiny compared to what people who actually use their machines for RAM-intensive things run.
Currently I work on an image viewer program called InteleViewer. The thing is, our image viewer likes RAM. I mean, we're viewing large images, and we have to remain responsive while downloading the JPEG versions of several thousand images, decompressing them and writing them to disk... oh, and did I mention we're using Java?
Yeah, yeah.. believe me, it is possible. What's crazy is we can do this with as little as 200 megs of RAM. It's not super speedy, but it's usable.
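(For the curious: a Java app's memory appetite is capped on the command line with the standard -Xmx flag, so a 200 meg ceiling looks something like the line below, with viewer.jar standing in for whatever the real jar is called. The JVM then has to live within that.)

    java -Xmx200m -jar viewer.jar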
At any rate, the doctors using our product like speed and will purchase anything to make it faster. Naturally, if they buy a workstation, one of the cheapest upgrades is more RAM: the more RAM, the less hard drive swapping our viewer needs to do. Oh, and if you want to view a stack of images in 3-D, you'd better have enough RAM to fit the whole thing, because we can't do much in the way of swapping when you can be looking at the stack of pixels from any angle.
RAM good. InteleViewer want more RAM!
We started to bump up against this RAM limit a while ago.. about the same time Apple was shifting to 64 bits. I can't remember what state we're in when it comes to the Java virtual machine, but the situation with Windows vis-a-vis 64 bits is shockingly ridiculous.
A 64-bit OS should not be treated as some sort of pro feature. During a transition you want to minimize the pain. The shift to 64-bit OSes is a problem, a bug, not a *&%$ feature upgrade.
Microsoft's attempt to split their 64-bit OS from their 32-bit OS has probably made the transition more difficult, because with fewer 64-bit OSes floating around there's less incentive to make better, more mature drivers.
I'm extremely gun-shy when it comes to upgrading, so my tendency is to wait quite a long time before getting a new OS. I've only recently upgraded to XP from Windows 2000. I'm a bit cheesed that I'm not going to be able to run yesterday's OS with lots of memory, because the transition only became really feasible recently.
At any rate, I'm going to stick with Windows XP and 32-bit drivers for as long as I can. Not only does the old stuff tend to be more stable, but it also runs faster and uses less of my system resources, which makes everything else run faster too. From experience, I've found that my idea of "speedy" and "stable" is a bit higher than the average bleeding-edger's...
Frustrations, theater etc..
Between rehearsing for the new play I'm in, work and the usual madness of everyday life I've not really had an opportunity to compose anything of note for this space.
That said, I do have a large article (and I mean embarrassingly large article) which goes into how to document code in an effective way. I wrote it because I kept having deja-vu when it came to code comments. The things I wanted to know weren't there, and what was commented was completely obvious. I was also getting the vibe from many on the team that code commenting was completely pointless, which is quite obviously bogus since Sun has managed to provide some very decent Javadoc comments for their code. What I have done is keep track of the things I want to know when I approach new code and put them all into a large document in a checklist sort of form.
The document became rather large when I realized that many of the items in the list only really make sense if you know how to write good code. As a result I needed to explain how to write code so that it doesn't require comments before getting into what to comment. In fact, many of the items in the list can be thought of as warnings of the form: "Watch out! This code is not written how you think!".
In the end the guide is as much about how to code well as it is about how to document well. It's quite a nice read.
Anyways, that will be coming up as soon as I can figure out how to translate the behemoth into blog form.
Ok, gotta get back to learning my lines. See ya.
Saturday, October 13, 2007
P180 vs PowerMac G4 MDD
For some reason two people I know recently decided to go out and buy a new PC. They decided to get a desktop PC and to have it custom built for them. When you have a PC custom built you need to specify every part: which CPU, motherboard, video card etc.. It's a whole lot of fun, and one can spend days and weeks mulling over benchmarks and spec sheets to.. well.. just to understand Asus' catalog of motherboards. In any case it's a lot of fun, and at least one of my favorite bloggers has made the argument that it's good for programmers (at least) to build their own PC so that they have a deeper understanding of their tools. This is totally ridiculous of course. If it were true I'd feel some need to make use of my VHDL skills and build my own CPU, or plunge back into the world of assembly or C every 5 years... The reason you want to build your own PC (or have it built for you) is because it's fun dammit!.. A very geeky kind of fun, but fun nonetheless.
(I can't bring up custom computer building without mentioning arstechnica's computer buying guide. Every time I've ever made a custom PC I've done the week or so of research and ended up with exactly what arstechnica recommends. Well, disturbingly close anyway.)
At any rate, the point I was getting to was that the question of which case to buy came up in conversation. These days it seems impossible to talk about computer cases without mentioning the mighty P180 case from Antec (or the new, updated P182). The Antec P180/P182 is widely thought of as one of the best, if not the best, case on the market. One of the most important things about this case is that the designers made quite a lot of effort to make it --QUIET--! I own one. It's quite nice. The doors and sides of the thing are made of some sort of sound-dampening material. The drives and fans are mounted on vibration-dampening mounts.. It's well thought out.
That said, it doesn't *look* very exceptional. The case comes in black and silver. I bought the silver version and I think it looks like a mini fridge with the front door shut. Needing a name for it on the DNS, I called it "the fridge". Funny eh? no? well never mind then!
anyways.. So the P180 I have is good and all, but I can't help comparing it to the PowerMac G4 MDD case. I mean, they sit side by side on the floor next to me so it's hard to resist the urge.
When I first brought the P180 into the house I was impressed by its height. This is a full tower case. Really it's in a whole different class than the MDD, since the MDD (MDD stands for Mirrored Drive Doors, just in case you were wondering) is a half-height tower, or mini tower as some people call them. The youngens I think call them that. Damn things keep playing in my yard. Get out of my yard you rascals. Wait, I don't have a yard.. never mind.
The PowerMac G4 MDD is freaking fantastic. To open it you just use the handle on the side and the whole side door comes open like this:
Notice the lack of cables? The cables are all built into the case. They took the time to thread all the big fat IDE cables through the case. I love it. The case has space for four hard disks and two optical drives. Putting in new HDs is incredibly easy and you don't have to worry about the big phat IDE cables getting in the way.. did I mention that?
Upgrading the RAM is easy: you just open the door and the slots are lying there in plain view, uncluttered by cabling, on the open door. Just plunk in RAM and you're good to go.
The computer even runs with the door open. It doesn't like it but it runs.
The experience with the P180 is much like any other case in this respect. First there are the screws:
Yep, I took a picture of the screws. They can be undone without a screwdriver but, meh, I didn't need to unscrew anything with the mac. It has a handle with a metal latch.
Then you have to slip the big door off like this:
.. just like every other PC case.. booo.
oh, and be very, very careful when you do this. The first time I did this I was having trouble getting the door off because the monkey that built my machine had put the side panel on such that some of the panel's slotting hook thingies (technical term) were on properly and some of them weren't. The panel was really on there tight.
So what I do is I grabs the front of the case and pulls the side panel with all my might... you know.. the way you do when a door is stuck. BAM! The damn thing exploded in my hand.. bits of the door hinge went flying all over the room. "%^*&(^!" I yells.
(The picture is a view of the front of the case from the top. You can see the optical drive on the right of the image.)
The P180 has a big front door on it. The idea is that the big front door covers up the drives and air intakes and such so the sound is muffled. This door is on a double hinge that lets you fold the door out of the way.. The first picture on the blog shows you what the computer looks like with the door folded to its side. Unfortunately the double hinge is a bit weak. If you put pressure on it, it breaks into many pieces (5, actually).
So.. I super glued all the pieces back together.. That didn't work very well.
and sent an email to Antec that basically said
"Um.. I broke the door, can you send me a new one of those plastic door hinge bits."
They sent a new door so I'm happy again, but I leave you with the warning: "The door hinge on the P180/P182 is weak. Don't apply any pressure to it or you'll break the hinge into 5 pieces!".. That said, it looks like Antec will send you a new one free of charge if you or a former friend breaks the door.. presumably within some sort of warranty period that I should know about but haven't bothered to read. :-)
Anyways.. inside the case looks like this:
.. well... it looks like that if you happen to have a computer inside it.
The P180 puts the hard drives in little removable compartments so mounting them is easier. It's no MDD but it will do..
See the circle thing that looks like a keychain ring? If you undo the screw underneath, and if you unhook it and pull, the whole thing comes out. Very nice.
Dear Antec,
Please get an old PowerMac G4 MDD and make a case that has the sound dampening of the P180 but the convenient access of the MDD and I will declare it to be the ultimate case.
Sincerely,
me
Tuesday, October 2, 2007
Yeah yeah we know.
If you have an opportunity to get feedback, never dismiss it.
Getting feedback from customers is never straightforward. The most important thing to realize, I believe, is that you only get a fraction of the feedback you think you're getting.
When it comes to the quantity of user feedback, I generally operate on a rule I discovered while writing Myster. The rule is: you won't get feedback unless your app doesn't run.
This is an exaggeration and a simplification, but it's there to underline a point. Customers don't exist to provide you with feedback, so when they feel compelled to give you feedback it's usually because they've found something that makes them yell. It can be because your app doesn't work at all, it could be that you've taken away something someone had based their workflow on, and it could be because your app is missing some capability that's vital for its continued existence in that firm.
Now, just because someone isn't yelling at you doesn't mean that your product isn't bad. Clients/users don't give feedback if the problem isn't large enough to trigger a strong enough emotional reaction to bother complaining about. If your customers are used to dancing bearware then they aren't going to report things that aren't really, really broken. If your customers are used to something really refined and you suddenly make a big mistake and disable or remove something they really liked, then you'll get feedback. In one case your program really sucks, so you'd expect to get lots of feedback on where it's broken.. In the other case your program fulfills their needs even with the change, and you're still getting angry feedback. This happens because feedback is triggered by the emotional reaction to events.
There exist some companies that use only customer feedback to guide their development efforts. These companies are usually fairly successful with this strategy; however, this represents a minimum level of competence. As someone who wants to present the best possible piece of software, you need to be more proactive. Solicit feedback wherever possible. Find time to just sit and watch your clients working with your software. There are really big gains possible for software that really fits the client's needs. If your software doesn't convert your user base into a herd of zealots, there's still room for improvement. :-)
Before I leave this topic I want to point out something. The more "advanced" your users, the more likely they are to feel confident enough to give you feedback. For example, if your product is aimed at the technical community you can expect to get a lot more feedback per user than if you're aiming at the consumer space. Also, users that try out your product early tend to be of the more adventurous type and will tend to give you lots of feedback as well. Be aware that as your product matures your user base will naturally shift toward more casual users (and the user base will get used to your product's quirks), so feedback starts to dissipate later in the product's cycle.
ok see ya.
Friday, September 28, 2007
The Linux loop
I'm using X but something doesn't work.
Use Y
I am using Y but something different doesn't work.
Use Z
I am using Z but something else doesn't work.
Use X
etc...
Wednesday, September 26, 2007
Data and type safety.
What's the difference between:
String path = ...
and
File path = ...
?
For that matter, what's the difference between:
int imageId = ...
and
ImageId imageId = ...
?
What is the point of creating a class for an object that contains one piece of data? Well...
In languages with a compiler-enforced type system, like Java, it's good practice to leverage the type system wherever possible. Creating types for things like paths or ids can prove very useful. By using the type system, the compiler will help you catch more errors earlier. This makes the code easier to write and maintain.
When should we create a class for a type? If you look at InteleViewer's code base, we don't always substitute implicit types (of the form "int id") with explicit types (of the form "ImageId id"). The main reason is that sometimes it doesn't give us anything, and the class can add a layer of bureaucracy that just serves to make the code more complex.
There's really a set of cases that determine how important/useful it is to turn an implicit type into an explicit type.
Cases:
- You have a 1-variable primitive type in which all values possible for the primitive are allowed values for the implicit type, but you don't do anything that actually needs to rely on the type.
You generally don't need to worry about this case unless your app is getting confusing enough that you want to use the compiler to check your code.
- You have a 1-variable primitive type in which all values possible for the primitive are allowed values for the implicit type, and you are often doing operations on it (formatting, funky math etc..)
If you're doing operations on it (especially operations that take an object of the same type as a parameter, like addition, for example) then *always* make a class for that type. Dates are a fairly good example of this. They have a mapping to longs, but we generally want to manipulate them with respect to each other. File objects are another good example (I think every String value more or less makes sense as a path).
- You have a 1-variable primitive type, but some values that are valid for the primitive type are not valid for the implicit type.
I generally make a class for this if the value escapes from the object/code module it's in. The reason is that things can get dangerous across module boundaries.
You generally want to defend against badly formatted implicit types across functional boundaries so that errors are caught early. Having an explicit type guarantees that the data you're getting as a param is well formed.
- You have a 2 (or more) variable implicit type.
Always make a class for this case.
Oh, one last thing: always make your data types immutable if at all possible.
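To pull all of that together, here's roughly what an explicit type looks like in practice. This is just a sketch (it's not InteleViewer's actual ImageId class), but it shows the three things these little classes tend to do: validate at the boundary, stay immutable, and behave like a value:

public final class ImageId {
    private final int id; // immutable: assigned once, in the constructor

    public ImageId(int id) {
        // Reject values that are legal ints but not legal ids, so bad data
        // is caught at the module boundary instead of deep inside the app.
        if (id < 0) {
            throw new IllegalArgumentException("image ids are non-negative: " + id);
        }
        this.id = id;
    }

    public int asInt() { return id; }

    // Value semantics, so ids work properly in Sets and Maps.
    public boolean equals(Object o) {
        return o instanceof ImageId && ((ImageId) o).id == id;
    }

    public int hashCode() { return id; }

    public String toString() { return "ImageId(" + id + ")"; }
}

Now a method declared as loadImage(ImageId id) can't be handed a patient id or some random int by accident; the compiler simply won't allow it. See ya.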
Sunday, September 23, 2007
Amazon vs Amazon. A catfight.. to the death!
The dollar (US) to dollar (CDN) parity has been quite an interesting development recently. Unfortunately it means that Canadian resellers (like amazon.ca) are in danger of being over-run by more aggressive competitors like amazon.com. Consider these price differences:
For example, The Age of Turbulence, Alan Greenspan's latest literary masterpiece, is (at this writing) 26.46 CDN and only 20.99 USD.
The story is much the same with DVDs too. Take, for example, the BBC's fascinating documentary series on the life and times of other species throughout the universe, Doctor Who. The third series is 87.49 CDN at amazon.ca and only 69.99 USD at amazon.com.
With the Canadian dollar and US dollar at parity, how are Canadian resellers like amazon.ca going to compete with US resellers like amazon.com? While some might argue that this is only a short-term imbalance and that the laws of economics will take care of things, I suspect otherwise! I would suggest massive, arbitrary government intervention to help Canadian resellers maintain their high prices.
P.S.: I am not a crackpot.
Tuesday, September 11, 2007
Stupid Linux/X-Windows tricks
Linux doesn't include a GUI component; however, most Linux distributions ship with some form of X-Windows. X-Windows has a broken copy-and-paste implementation that has been the bane of my existence for the last 3 years.
I've just run into an example of how this brain damage makes my life unpleasant. I bring it up here because every time I mention that copy and paste is horribly broken on Linux, people deny it. Try this on your Linux box.
I opened about 20 files in gedit (a very simple editor that works the way a normal Windows/Mac program does). I wrote something in the first tab, copied it, closed the tab and tried to paste it into the next tab. It doesn't work. It doesn't work because you've closed the window that contained the thing you selected. No other operating system does it this way. Other operating systems, at least conceptually, store whatever you copied on a system clipboard. You can then quit the application and your data is safe.
The current Linux situation means that I have to copy and paste my text into some other window or application so that my clipboard contents don't go away when I close the tab that used to contain the information. This is totally absurd. You want to know why I'm not a Linux fanboy? It's because of stuff like this.
Thursday, September 6, 2007
How to do preferences
Recently, codinghorror ran a blog entry under the title of Was The Windows Registry a Good Idea?
http://www.codinghorror.com/blog/archives/000939.html
Now, I don't want to be caught dead defending the registry, so I won't defend it. What I want to tackle are the various comments on the blog posting which suggest alternatives that are clearly inferior or, alternatively, blame the registry for other issues entirely.
First up Jeff Atwood himself:
How appropriate. I post about the aggravating nature of the registry last night, today I get to "fix" a DVD-R drive that mysteriously fails to load with error code 39, by making this cryptic registry change:
http://forums.techguy.org/hardware/572840-dvd-burner-stopped-working-drivers.html
(see post #4 -- remove "LowerFilters" from the "{4D36E965-E325-11CE-BFC1-08002BE10318}" key under HKLM\System\CurrentControlSet\Control\Class)
Works like a champ now. :P
Jeff Atwood on August 29, 2007 06:10 PM
I too have had lower filters cause problems. This isn't the fault of the registry; this is a design flaw in Windows.
Installers have the ability to make changes to settings that are not available to the user. In my case an installer (probably iTunes) added a lower filter, and then the uninstaller neglected to remove it. It's a good thing I had read Mark Russinovich's blog posting which, among other things, mentions that..
"...Unfortunately, although you can view the names of registered filter drivers in the “Upper filters” and “Lower filters” entries of a device’s Details tab in Device Manager, there’s no administrative interface for deleting filters. Filter registrations are stored in the Registry under HKLM\System\CurrentControlSet\Enum so I opened Regedit and searched for $sys$ in that key. I found the entry configuring the CD’s lower filter: ..."
..otherwise I would have assumed that the CD-ROM had died. The lower filter registry key problem is one of not having any interface in the GUI proper for managing lower filters.. OR that the system is so blindingly stupid that it can't .. well.. I'll let Mark explain this one:
"When I logged in again I discovered that the CD drive was missing from Explorer. Deleting the drivers had disabled the CD. Now I was really mad. Windows supports device “filtering”, which allows a driver to insert itself below or above another one so that it can see and modify the I/O requests targeted at the one it wants to filter. I know from my past work with device driver filter drivers that if you delete a filter driver’s image, Windows fails to start the target driver. I opened Device Manager, displayed the properties for my CD-ROM device, and saw one of the cloaked drivers, Crater.sys (another ironic name, since it had ‘cratered’ my CD), registered as a lower filter:..."
Essentially, if you have a CD-ROM with a lower filter thingy and you delete the file on the HD containing the filter thingy's code, then Windows tends to disable the entire device and not give the user any useful feedback as to what is going on. This, not the registry, was Jeff's problem.
Ways of screwing up user preferences. Here's a bad suggestion:
"I personally like XML and ini configuration files, but they should be kept in the dir with the rest of the application's files IMO."
User preferences can never go in the same directory as the app. If they did, the application would not work with multiple users. Consider the two cases:
1 - Multiple users with multiple accounts sharing the same app. Every time one user changed a pref, all the other users would get it.
2 - A user running an app off a remote hard disk. Like the previous case, all settings would be shared.. oh, and there are also concurrency issues that will result in corrupt preferences.
Here's a note of interest: applications on the Mac used to store their settings inside themselves. They actually wrote their settings into their application files. This isn't as crazy as it sounds, because the Mac had a way of storing resources like strings, images etc.. as real resources (as opposed to ad-hoc twiddling inside a binary file) inside the application program binaries, and these things could be modified while the application was running. This was deprecated when AppleTalk became available and people started launching applications over a network. It caused race conditions if two people were running the same app and modified a setting at the same time.. oh, and both users would interfere with each other's settings.
(It actually had quite a few other problems too, not the least of which was the possibility of corrupting the application file.)
Here's a point about speed:
"Yep, go with those ini files (or xml)!
Then deal with the performance hit of searching out keys or trying to update values. "
Someone once told me the key to bending a spoon is to realize there is no spoon. It's much the same when optimizing code.
There's no significant overhead with using an "ini" or "xml" file approach. This is especially true when it comes to making changes and writing them out.
What you do is read the whole config file into RAM and turn it into a data structure. You then use the data structure and make changes to it. You can then re-encode this data structure back into an ini file (or whatever) and flush it to disk at your leisure. I prefer to flush it on a separate thread so it doesn't block the thread making the changes. You can also extend this system to automatically save/flush the preferences for you by simply detecting any changes made to the data structure, setting a timer for, say, 30 seconds (to batch any changes that might occur slightly after the first change), then writing out the data structure in a background thread. It's loads of fun. I've implemented such a beast at least twice now. The net result is all your accesses to the prefs are super fast 'cause they are in memory..
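If that sounds abstract, here's a minimal sketch of the idea. PrefsFile is a made-up stand-in for whatever code actually encodes and decodes your ini (or xml) file; the interesting part is that reads and writes hit the in-memory map, and the disk only ever gets touched by a background timer thread:

import java.util.HashMap;
import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;

public class CachedPrefs {
    private final Map<String, String> values = new HashMap<String, String>();
    private final PrefsFile file;
    private final Timer flusher = new Timer(true); // daemon background thread
    private boolean flushPending = false;

    public CachedPrefs(PrefsFile file) {
        this.file = file;
        values.putAll(file.readAll()); // read the whole config into RAM once
    }

    public synchronized String get(String key) {
        return values.get(key); // no disk access: it's all in memory
    }

    public synchronized void set(String key, String value) {
        values.put(key, value);
        // Start the 30 second countdown on the first change only; changes
        // made while the countdown is running get batched into the same write.
        if (!flushPending) {
            flushPending = true;
            flusher.schedule(new TimerTask() {
                public void run() { flushNow(); }
            }, 30 * 1000);
        }
    }

    private synchronized void flushNow() {
        flushPending = false;
        file.writeAll(new HashMap<String, String>(values)); // snapshot, then write
    }

    // Stand-in for the code that actually parses/serializes the prefs file.
    public interface PrefsFile {
        Map<String, String> readAll();
        void writeAll(Map<String, String> snapshot);
    }
}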
"Don't forget to implement proper locking logic to handle multiple users hitting the same ini file."
Multiple users will not hit the same file because you've got one file per user per app, as I've already mentioned.
"I'm sorry, I left off this gem:
"The registry is a single point of failure."
If you have a bunch of configuration files in a directory somewhere, and you delete those files... Isn't that a single point of failure?"
No, that would be "n" points of failure skillfully dispatched with one stroke. You can simulate this another way by getting a gun and shooting your hard drive. Go ahead; try to find a way of saving preferences on a hard disk that gets around that baby.
The point of having different programs save their settings in different files is that if a program is, let's say, writing to the registry and the power goes out or it crashes, it doesn't leave the entire registry file in an indeterminate state. Microsoft has had to almost re-invent the concept of a journaled file system in registry form to get around this little problem. No wonder it took them so long.
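Incidentally, the usual defence against the power-goes-out problem for an individual settings file is to never rewrite it in place: write a complete new copy next to it, then swap. After a crash you're left with either the old file or the new file (plus maybe a stale temp file), never half of each. A rough sketch, with the usual caveat that File.renameTo's guarantees vary by platform:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SafeSave {
    public static void save(File prefsFile, byte[] encodedSettings) throws IOException {
        File tmp = new File(prefsFile.getParentFile(), prefsFile.getName() + ".tmp");
        OutputStream out = new FileOutputStream(tmp);
        try {
            out.write(encodedSettings); // write the complete new copy first
        } finally {
            out.close();
        }
        // Caveat: renameTo isn't guaranteed atomic everywhere, and on some
        // platforms it won't replace an existing file, hence the fallback.
        if (!tmp.renameTo(prefsFile)) {
            if (!prefsFile.delete() || !tmp.renameTo(prefsFile)) {
                throw new IOException("could not swap in the new preferences file");
            }
        }
    }
}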
"As others have pointed out, .INI files have their issues to (inconsistent format, can be scattered around, multi-threaded access), but *nix has made it work relatively well. That said, I pity the fool who has to look at sendmail.cf for the first time."
First off, no preferences file, whether it be .ini or XML, should have to be edited by the user. It is up to the GUI application itself to offer a way of accessing its preferences.
Secondly, preferences files should not be scattered around! If they are, then they will, most assuredly, not be in the right scope. That is, user preferences must be in a place that is associated with the user so that users don't tread on each other. They cannot be in the same folder as the application.. They cannot be in the same folder as a neighboring application. They cannot be one level up or one level down. They cannot be in any of these places for the same reason; a reason I explained above: multiple users. User preferences files must be in some sort of user-specific folder.
Thirdly! (I thought I'd surprise you by using an exclamation mark on that one) As an operating system designer, you're going to want to offer an API for doing preferences. As has been made obvious by the comments posted to Jeff Atwood's blog, most programmers don't have a deep understanding of how to do preferences properly and, for that matter, writing a good preferences API is not an easy thing. Since your system is going to offer a preferences API, you might as well make the FORMAT of the files it produces standard. Why not XML? Yeah, yeah I know...
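For what it's worth, Java ships with exactly this sort of baked-in preferences API: java.util.prefs. The application never sees a file path or a file format; the platform decides where per-user settings live (amusingly, the default backing store on Windows is... the registry):

import java.util.prefs.Preferences;

public class PrefsApiDemo {
    public static void main(String[] args) throws Exception {
        // A per-user preference node, scoped to this class's package.
        Preferences prefs = Preferences.userNodeForPackage(PrefsApiDemo.class);

        prefs.put("theme", "fridge-silver");
        prefs.putInt("windowWidth", 1024);
        prefs.flush(); // push any pending changes to the backing store

        System.out.println(prefs.get("theme", "default")); // second arg is the default
        System.out.println(prefs.getInt("windowWidth", 800));
    }
}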
"Your article reminded me of this quote;-
Those who do not understand Unix are condemned to reinvent it, poorly.
-- Henry Spencer"
Given that UNIX is very poor to begin with, that's pretty bad.
Yeah, ok so that was an off-topic unix jab.. I'm allowed 1.
"For instance, when I upgrade and reinstall Windows, most of the games I have installed on my secondary drive are instantly broken because they store cd-key and (redundant) path information in the registry. The game vendors' support teams will tell you to reinstall all your games and patches. Personally, I'd rather search forums and spelunk through the registry to manually recreate the two or three registry keys the game is looking for."
This isn't so much the registry's fault as it is an asinine Windows convention. Quite a few programs tend to store what are really string resources, or required configuration information, in the registry. This belongs in the application, DAMMIT!
I suspect this whole thing got started when you absolutely needed to register things like DLLs or god knows what other type of thing in the registry. You see, the Windows registry not only stores user preferences but also stores system configuration information. One could argue that this was its original purpose, really, and it just grew to contain preferences information as well.
There is a huge habit of using installers and de-installers to play with system settings, both in the registry and to add icons to the desktop, the start menu, that quick launcher thing etc.. or to do things like install DLLs or some other damn thing (lower filters!). This is a major irritation on Windows and the primary reason why installers exist. If you look at the Mac, installers are rare. This is because Mac programs tend to be self-contained and there's no cultural convention of installers installing icons in the dock or on the desktop. In fact, one could argue that you treat applications as icons and just drag them to your applications folder to "install" them. This is much better than an installer system in which so much can go wrong. Even after years of work on the Windows installer system it still sucks balls. Sure, it's not hopelessly broken any more as it was in the Windows 95/98 days, but it's still really bad. Installation is something the Mac really gets right.
"Ever tried to make a single entry in an ini-file readonly for the user?
I wish you lots of fun...
Only with the registry you could administer hundreds of machines in an easy and consistent way.
Only with the registry there is a way to double-click a .reg-file and enter its content automatic into the registry.
and on an on and on.
Registry is like a database, ini-files are like chaos."
People did this sort of thing with our application InteleViewer. This configuration is not supported! If you do this with our apps and then phone our support line, we will not support you. In the end what we did was write the functionality people were trying to reproduce into our app in a decent, easy-to-use way. It's also far more powerful, since it does what you were trying to achieve in an explicitly supported way! It's called roaming user preferences and it rocks.
Also, I think I should point out the existence of LDAP.
"...I do wish they'd adopt the same plist configuration format found in OSX, its XML based, very flexible and easy to work with.
Ini is kinda primitive when compared to plist, and Unix's convention of having no convention here is a poor idea."
Actually, I agree with this one.. Come to think of it, the way Mac OS X stores user preferences is exceptional. It does everything I've advocated here.
1 - It stores user preferences in the user's home folder under a special folder which is actually accessible to the user.
2 - Each app stores its preferences in its own files.
3 - The files use an XML format called a plist.. There's even an editor included with the development tools.. which is good, since while XML is, in theory, human readable/editable without an editor, no human would ever want to actually try.
4 - It's got a nice API baked in for dealing with application preferences.
5 - There's a well-understood convention that no application should ever put a setting in its preferences file that is essential to the functioning of that app.. All this to say that if you get fed up you can simply trash everything in the preferences folder and be assured that nothing will break.. You can also trash the specific preferences file of a single malfunctioning app if you suspect it to be the source of problems.. I've actually done this many, many times on the Mac and it's amazing how often corrupted preferences files are the problem.
6 - There's also a convention that you're not supposed to edit these preferences files by hand!!!!
Anyways... I suppose my main point here is that if you're going to try to build the ideal user preferences system, you'd better bone up on the way the Mac does it.. Same deal if you want to look at creating a better installation system.
Ok I'm done..
Friday, August 31, 2007
The vacation principle
The vacation principle is: no matter how much work you get through per day and no matter how indispensable you think you are, you aren't, because you can go on vacation for two weeks and nobody will notice.
"Oh, I wondered why I hadn't seen you around in a while."
The corollary is: relax.
"Oh, I wondered why I hadn't seen you around in a while."
The corollary is: relax.
Tuesday, August 21, 2007
Storms
Dear God,
I no longer live in the Bahamas.. In fact, I never lived there.. Also, that storm was a bit too intense. Other than that you're right on track.
Thanx,
@
Saturday, August 18, 2007
Folders within folders.
I think I'm going to make an effort to flatten the folder hierarchy in my "my documents" folder. It's a last-ditch attempt to encourage myself to stop storing important files on the desktop... It won't work, but it will make me feel better.
Friday, August 17, 2007
Spaghetti code is not as delicious as it sounds
Instilling code modules with omniscience is not good design practice.
Thursday, August 16, 2007
The weather
Dear God,
We have not had many thunderstorms this summer. I find them very entertaining. Please send more.
Sincerely,
@
Thursday, July 19, 2007
Good attributes for a software engineer
I was browsing the internet recently on my electronic browse-board when I came across this list of good attributes for a software engineer. I thought I would share it with you.
http://www.thundernet.com/alpartis/articles/engineer.shtml
IMHO there's quite a difference between being a developer, or "hacker" as Paul Graham likes to say, and being a software engineer. I don't like being called a hacker. I like to think of myself as a very clever, inventive, dedicated, no-compromises, problem-solving rottweiler. However, I also like to think that what I'm doing is designing products for some goal. Products that will be around for a long time and are built with maintainability and extensibility in mind. While coming up with a cute solution is nice, software development is more than that. You have specs, deadlines, budgets, design constraints, maintainability, complexity budgets, safety of people and their data, coupling, cohesion, deployment, supportability etc.. to worry about. The hacker finds solutions. The engineer builds. And there's a fundamental difference in mindset.
Tuesday, July 10, 2007
ramblings without insight, pause or coherence
On the 23rd of September 1865, Steve Stormsmith set sail from Portsmith for the new world. At 12:70pm, seven days later, he was found, slightly confused, inside a Walmart in Sam Town, Kentucky, in the present day, trapped in a piece of long-winded sci-fi novel dialog with a shelf stock boy whose name has been lost to the ages but who is known to those who purvey in such things as names or titles as Bob, the shelf stock boy. Bringer of New Inventory, Dropper of Cans and Announcer of Today's Specials.
Steve looked around him in a startled fashion, having been startled by the mysterious teleportation slash "startling" machine which had transported him to his current location by mechanisms unknown. He took a moment to marvel over the last sentence, then bravely pushed aside a can of string peas on a nearby shelf. The can of peas which, up until that moment, had quite happily been blocking Bob from Steve's view now moved itself sideways. The inertia of the tin and the friction between itself and the shelf surface bent under the iron will of Steve and the marginal amount of mechanical force required to move the aforementioned can of small, green, eatable objects slightly to the left. Steve paused to look at the lettering on the aforementioned can of small, green, eatable objects and noted that it read "peas". He did not recognize the letters or the word, as he was not able to read. He had lost his ability to read in a freak speed reading accident when he was a teenager. People had always told him to slow down, but he wouldn't listen. He was young and therefore immortal. What no one had told him was that you don't necessarily die. Sometimes you're just horribly crippled or mutilated. Many have lost an arm or a leg speed reading beyond their capabilities. As it turned out, Steve lost his ability to read at all. What no one told Steve was that the immortality of his youth only applied to his physical form and not his soul, which died a little each time he was confronted by a labeled can of tinned vegetables. How could life do this to him? For all he knew the can could contain tiny, green, novelty golf balls, as the only thing he had to go on was the picture on the tin. He didn't even know where he was, which would have been helpful to know, for if he had known where he was he would have realized that it could only have been a can of peas, as Walmart always stocks the tiny, green, novelty golf balls next to the tinned pineapple slices. The can he was looking at was next to the tinned tomatoes. It was a dead giveaway, really.
It was Bob who eventually broke the melancholy silence. "Umm.." he said, as if he were thinking of what to say, though in reality he was trying to determine if the question he was about to ask was a stupid question. It turned out that the question he was about to ask was not a stupid question, although there was no way that Bob, given what he knew, could be sure. In the end he asked the question not because he was certain it was an insightful question to ask but because it was what the employee training pamphlet told him he should ask. "Can I help you?"
Steve looked at Bob. No, in truth he stared at Bob. Steve concentrated hard on the auditory signal he had picked up. His brain analyzed the signal in extreme detail. It checked the signal for peaks and valleys. It split the signal up into its constituent parts. It separated noise from voice from background clutter. Eventually it could make out some kind of speech. A call went out through the mass of connected nerves and neurons for an interpreter, some quivering mass of white or gray matter which could make sense of the auditory blizzard of signals. Eventually a match was found, and the neurons did fire and rejoice, for it had been determined that the voice was one which was speaking English. From there it was easy. Words were decoded, context was added, a pinch of grammar, a dab of linguistics, a touch of magic and poof! Steve understood the sentence. By this time, however, Steve realized he'd been staring at Bob for well over a twelfth of a second. He scanned over the mass of text that had brought him to this moment and realized a horrible truth. The writer was padding his novel. Steve felt a chill pass over his soul. It would be a long time before he slept again.
-------------------
That's the end of chapter 1 of The Incredibly Long Winded sci-fi novel. Join us next week for chapter 2, where a passer-by asks Bob where the cereal is kept and half the world's population mysteriously dies of old age.
Thursday, July 5, 2007
Oops, the RegExp was too greedy.. and no one thought about cancel. The race is on!
I have a saying: don't write bugs; your program already has enough with the ones you don't know about. What I mean by this is don't write code that you know can fail under certain rare circumstances/inputs/races, because Murphy's law says those circumstances will occur. Also, since you're not perfect, those special cases might be more common than you think and may interact with bugs you don't know about to create little disasters.
Recently I changed my CVS password on our local CVS server. This was done by using a very convenient script. My login name is "at". When I changed my password "mat"'s account was deleted. Later, a colleague "vsingh" went on the server to check to see if his account was deleted and all the user accounts except his disappeared. What happened?
Well, the convenient script changed passwords by reading in the whole file, deleting every line that contained "username:", and writing the file out again with the new password appended. It's unfortunate that the script's regex wasn't anchored to the start of the line, as "at:" also matches inside "mat:", which meant that when "at" changed his password, "mat"'s account was deleted. Good thing "pat" and "arafat" don't work in R&D..
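To make that concrete, here's a minimal Python sketch of the same class of bug (the real script isn't reproduced here, so the file format and hashes are my own invention):

    import re

    # A toy password file: one "username:hash" entry per line.
    passwd = "at:hash1\nmat:hash2\nvsingh:hash3\n"

    # Buggy: "at:" matches anywhere, including inside "mat:", so deleting
    # every line that contains the pattern also deletes mat's entry.
    buggy = re.sub(r".*at:.*\n", "", passwd)
    print(repr(buggy))   # 'vsingh:hash3\n' -- mat is gone too

    # Fixed: anchor the username to the start of the line.
    fixed = re.sub(r"^at:.*\n", "", passwd, flags=re.MULTILINE)
    print(repr(fixed))   # 'mat:hash2\nvsingh:hash3\n'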
So, why did user "vsingh" delete everyone's account? Well, "vsingh" was only interested in whether his account had been deleted, so he got far enough to type in a password and then typed ctrl-c to cancel. Unfortunately, the script behaved badly when it was cancelled. Instead of reading in the file, modifying it, then writing it out, it abandoned reading the file, added vsingh, and overwrote the original. Net result: some panic until we restored the file from backup.
Oh, and there's still a race condition in the script too. Can you spot it? Thankfully that hasn't manifested yet.
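A common way to harden a script like this against both ctrl-c and the lost-update race (two people running it at once, each reading the old file, with the second write clobbering the first) is to hold a lock for the whole read-modify-write and to write the result to a temporary file that gets atomically renamed over the original. A rough sketch of the idea in Python, with an invented path and the same toy "username:hash" format (this is not the actual script):

    import fcntl
    import os
    import tempfile

    PASSWD = "/srv/cvs/passwd"   # hypothetical path

    def set_password(user, new_hash):
        # Exclusive lock so concurrent runs are serialized.
        with open(PASSWD + ".lock", "w") as lock:
            fcntl.flock(lock, fcntl.LOCK_EX)
            with open(PASSWD) as f:
                lines = f.readlines()
            # Exact username match, not a substring match.
            lines = [l for l in lines if l.split(":", 1)[0] != user]
            lines.append(f"{user}:{new_hash}\n")
            # Write a temp file in the same directory, then atomically
            # replace the original.  A ctrl-c before the rename leaves
            # the real file untouched instead of half-written.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(PASSWD))
            try:
                with os.fdopen(fd, "w") as f:
                    f.writelines(lines)
                os.replace(tmp, PASSWD)
            except BaseException:
                os.unlink(tmp)
                raise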
Thursday, June 28, 2007
Dictionaries filled with typos
Why, oh why, do most spell-check dictionaries let the user easily add words to the dictionary but not remove them? Every week or so I accidentally add yet another common typo to the list of custom words and then have to go looking through that program's config files, searching for the place where it keeps my list-of-typos, find it, and remove the offending word. How are normal users supposed to deal with this situation? I seem to remember the first few programs I used with this feature provided a nice way of opening the custom list of words and editing it. Now the words just disappear into the ether, and good luck trying to convince the computer that "teh" is not a valid word!
Grr.. both SeaMonkey and Evolution do this to me. At first I thought it might be accessible by way of the preferences. Since it was a relatively pedestrian setting, I looked in the advanced tab.. Nope, not there... Not in any of the other tabs either.. Oh well.
Putting a potentially dangerous setting right next to items that are in common use is just bad UI design. Making it incredibly difficult to fix the mistake is downright user hostile. A pox on whoever came up with the bright idea of omitting this setting!
Thursday, June 21, 2007
Programming in a group
http://www.codinghorror.com/blog/archives/000890.html
1 isn't the loneliest number! 0 is! It doesn't even have itself to keep it company!
Actually, I spent almost 10 years programming on my own, on my own projects. When I did join the rest of the programmer race, the biggest challenge was communication. Thomas Kuhn, in his book The Structure of Scientific Revolutions, mentions that one of the problems when trying to resolve conflicts between two competing paradigms is that the vocabulary, structure etc.. of the frameworks used within each paradigm are different. In order to have successful communication, you need to know enough of the other person's frame of reference and share enough vocabulary in common to say the right words in the right order to transmit a thought. Unfortunately for me, I had little to no idea of how other people thought about programming and only a vague idea of the vocabulary. My first six months were spent in a continuous, desperate bid to explain my thoughts on a given design or problem in a way that was comprehensible to someone else. In the end I got the general idea.
Programming in a group can be very humbling. It's human nature to excuse one's own mistakes as mistakes, sure, but inevitable ones.. or to decide that something or other was impossible anyway... When programming in a group, it's likely that someone in the group will be able to show you your folly; to show you that your inevitable mistake is someone else's obvious mistake.
..arrogance is the natural result of insufficient "learning opportunities"..
Wednesday, June 20, 2007
Biking
Last week-end was the Ottawa-Kingston bike tour.. aka the Rideau Lakes cycle tour.
http://www.ottawabicycleclub.ca/rlct
It's a 360km bike ride spread out over two days. 180km from Kingston to Ottawa and next day back again.
Why is it the last 20km are always completely brutal? When we passed the 45km-to-go mark I felt fine. Even when we passed the last-20km mark I felt fine.. but then we got this massive headwind and I just died. Anyways, we did alright in the end. It also only happens on the second day. The first day the last few kilometers are always fine.. Must be some sort of bike-tour-law-of-physics or something :-).
On the last day of the tour I got two flats (the second one happened literally a few hundred meters from the finish. It was a relatively slow leak so I just leaned progressively forward until I had to get off with about 100 meters to go.) and my chain broke. Oh how I hate it when the chain breaks.. My chain has now broken twice. Time to get a new chain. It was lucky my dad had brought some extra chain links with him, because I'd already taken 2 out last time and needed to take another 4 out for the repair (well, three links really, but it was easier just to take 4 out). He brought exactly 6 links so it worked out well.
The problem with getting a new chain is that you also need to get a rear cassette. If you don't, the old cassette will damage the new chain and wear it out very quickly. Since I broke my wheel two weeks before the ride and did the ride on a loaner, I also need a new wheel. The pessimist here would suggest I actually could use a new bike, but I don't think so. I don't like throwing out old equipment, especially if it's not working. I guess that means I'd rather get it working and then throw it out, which doesn't make any sense, but I've checked and there's no law that says I have to make sense, so I'm going to stick with my preference. :-).. Not that I am going to throw it out. I'm sure it could do another hundred million miles, so the plan is: get all the new parts I need to bring the bike to working order and then keep riding it.
One important thing I learned on this tour: taking two spare inner tubes is good. However, before you do, make sure they are the right size for your wheel. An inner tube with a Schrader valve won't work on a Presta valve rim... Also, 35mm wide is a little too big for a 28mm tire.. doh! Canadian Tire sold me the wrong tube! Right box, wrong tube!
Monday, June 18, 2007
Bugs
Well, I'm happy. One of the bugs I filed on Sun's Java VM's drag and drop support on Windows has been dealt with and fixed.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6362095
I'm quite proud to have helped improve the robustness of their product. It makes me wonder, though... Why did this take so long to find? While trying to figure out what the heck was wrong with my code I posted a message in the Java forums. The only person to reply was someone wanting to know how the heck I managed to get drag and drop working at all:
http://forum.java.sun.com/thread.jspa?threadID=567809&messageID=3979708
I'm starting to wonder if drag and drop is one of those technologies that tends to be badly understood despite the fact it's widely used and very basic. A few other technologies that have a similar problem are:
1) How the various text encodings work and interact. Everyone understands ASCII, and everyone understands that UTF is magic fairy dust that makes everything work, but the concept of a text encoding format, and why it's important to know which encoding is being used when reading a string of text, seems to be lost on the majority of programmers.
http://www.codinghorror.com/blog/archives/000178.html
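As a toy illustration (my own example, not one from that link): the same bytes decode to different text depending on which encoding you assume, and a wrong guess either silently garbles the string or blows up.

    # "café" encoded as UTF-8 is five bytes; the é takes two of them.
    data = "café".encode("utf-8")

    print(data.decode("utf-8"))    # café   -- correct assumption
    print(data.decode("latin-1"))  # cafÃ©  -- wrong assumption: mojibake
    data.decode("ascii")           # raises UnicodeDecodeError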
2) String escaping. The number of times I've seen code where strings haven't been properly escaped before being processed is uncountable. The basic idea behind escaping is to allow arbitrary strings to appear inside another formatted string. The classic example for me is how to display things like HTML syntax inside an HTML document.
http://en.wikipedia.org/wiki/Escape_character
http://en.wikipedia.org/wiki/HTML_encoding
http://amit.chakradeo.net/2005/11/28/escaping-urls-vs-escaping-html/
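For the HTML case the fix is one library call away; a tiny Python example (mine, not taken from those links):

    import html

    snippet = '<b>bold</b> & "quotes"'

    # Escaped, the browser displays the markup instead of interpreting it.
    print(html.escape(snippet))
    # prints: &lt;b&gt;bold&lt;/b&gt; &amp; &quot;quotes&quot;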
The reason I mention these two in addition to drag and drop is that the one thing they all have in common is that the central point of confusion revolves around data formats. In fact, sometimes I swear that, if I didn't know better, many programmers simply do not understand what it means for data to be in a format at all.
Thursday, June 14, 2007
Anti-aliasing
Joel and Coding Horror are talking about font anti-aliasing today.
http://www.joelonsoftware.com/items/2007/06/12.html
http://www.codinghorror.com/blog/archives/000885.html
There's been a renewed interest in the topic since Apple's Safari hit MS Windows and brought along Apple's way of doing anti-aliasing, ignoring the way Windows does it. I think Joel has a good point, and I agree with his hypothesis that all that's happening is that people are being confronted with a new, different way of doing anti-aliasing and are expressing an aversion to it because it's new. I suspect this partially because I was using computers before this was generally possible and, to this day, still get irritated by the fact that anti-aliasing is on at all.. That is, when I notice.
Apple's version of anti-aliasing, where they try to reflect what the font would look like when printed, is really handy when writing up documents to be printed. It's also quite nice if, like Apple used to (and maybe still does), you use made-for-print fonts on the web. I remember seeing Apple's old font, Apple Garamond (http://en.wikipedia.org/wiki/Garamond), for the first time with anti-aliasing enabled and thinking to myself: "Ooohhh, is that the font they're using."... Without the anti-aliasing I didn't recognize that the font on the website was the same as the one on the Mac's computer box. It's a really nice looking font too, so I was quite impressed. With fonts made for on-screen reading, though, it doesn't help much. Did you know, for instance, that way back when the Mac was created it used a very small, 9 inch screen running at 72 dpi? They had quite a bit of trouble getting any fonts that looked nice since the screen's resolution was too low and the fonts too small. Their solution was to create fonts like Monaco, Geneva and Chicago that were essentially made to look good when viewed on the screen:
http://cajun.cs.nott.ac.uk/compsci/epo/papers/volume4/issue3/ep050cb.pdf
http://lowendmac.com/myturn/2k0525.html
It really showed: they looked very nice without anti-aliasing. I think we should return to the good old days and make fonts look good without anti-aliasing. Now that's the final solution to blurry text.... :-)
First post!
Ok, so we've tried LiveJournal, now we try Blogger. So far so good. Ordinarily one would expect that this is the first place I'd turn to, since I actually read a few Blogger-based blogs. In reality I chose to shop around first. Partly so I could get the lay of the land, but mostly because I can't stand the word "blog". It sounds like bog or blob. Every time I hear someone described as a blogger I get the feeling the person is liable to sneak up behind me and dump a big bucket of slime on me or something. Yuck. I have a strong dislike of slime.