I finally got around to watching the Doctor Who Christmas special. It was enjoyable and well worth the time, but it is also a little sad, as it will be one of the last few stories with David Tennant as the Doctor. The title of the Christmas special was "The Next Doctor", which I suppose was a little joke from the producers, since the official announcement of the new Doctor only came in early January.
The new actor will have some big shoes to fill. David Tennant was by far the best Doctor of all. I also liked how the new series changed the Doctor's personality. The character has been through so much in the intervening years that he seems a lot meaner than the incarnations of the 1970s and 1980s. Some have even suggested that the current Doctor is a return to the first one, who was much more vindictive than the later personas.
It made the return of the series a lot more interesting and watchable. However, the entire creative team has changed as well, so who knows what direction they will decide to take. I'll watch it no matter what, since it's just so great to have Doctor Who back on the air.
Saturday, January 31, 2009
Friday, January 30, 2009
Ga-Rei Zero
I watched the final episode of Ga-Rei Zero tonight. I did enjoy the series but at times it was a little depressing. The main characters, Kagura and Yomi, are so flawed that it could only end in tragedy. As the body count increases, you start to wonder if anyone is going to survive. A few do, including one of the main characters, but at a cost that is much more difficult to measure, especially for someone so young. I imagine that all soldiers must learn to harden themselves against the death and destruction around them in order to keep going, but doing so sacrifices a small part of their humanity.
Definitely an anime for a more mature audience.
Thursday, January 29, 2009
Symfony
Since the end user will be developing his site in a PHP framework called Symfony, I wanted to test that it works with the setup on the LAMP server. Symfony has a tutorial which it claims can be completed in an hour. It went well for a while, until one change caused the application to throw exceptions. What I should have done was undo the change and just use what I had for my testing. Instead, I wasted considerably more than an hour figuring out what the problem was. I did solve it eventually, but the very next step failed again. By this time I had had enough and resisted the urge to fix the problem. I really don't have much love for PHP.
I did learn a little about Symfony and found that you need programming skills to get the most out of it. I hope the end user knows what he is getting into. I've been told that he is just a regular user. I hope he has more skills than that; otherwise, Symfony is going to be a very steep learning curve.
Wednesday, January 28, 2009
SFTP Jails
The usual method to reduce the risk of exploits by local users on a system is to substitute /bin/false as the login shell. However, the users still need FTP access so that they can manipulate their files on the system, and you still need to prevent them from wandering around the filesystem. Some FTP servers, like vsftpd, will let you put the user in a chroot jail. I prefer secure FTP (SFTP), which is included with the secure shell server, but how do you jail the user?
I found two solutions: scponly and rssh. Both are packaged in Debian etch and both provide scripts to build the jails, so they are equal in most respects. I opted for scponly because it only provides SFTP capability and thus requires a simpler chroot environment than rssh, which allows for full shell access within the jail.
After I installed scponly, I ran the script to create a test jail. It seemed to work fine, but when I logged in with sftp, the connection was terminated immediately. My initial search revealed that the usual reason for this is a jail that is not set up correctly. Indeed, the log showed that sftp-server terminated with a "file not found" error. Yes, but which file?
Some more searching eventually revealed that the setup script has two problems: it does not create /dev/null, and it does not copy the correct dynamic linker on a 64-bit system. The clearest solution was here. As that page also points out, if I had checked the Debian bug reports for scponly first, I would have found the solution a lot sooner.
Why didn't I think of that? Umm...
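In hindsight, a small script to cross-check the jail contents would have pointed at the missing files straight away. This is only a sketch; the jail path and the location of sftp-server inside the jail are assumptions, not necessarily what scponly's setup script creates.

    #!/usr/bin/env python
    # Sketch of a chroot jail sanity check. JAIL and BINARY are assumptions;
    # adjust them to whatever the scponly setup script actually created.
    import os
    import subprocess

    JAIL = "/home/testjail"            # hypothetical jail root
    BINARY = "usr/lib/sftp-server"     # path of sftp-server inside the jail

    problems = []

    if not os.path.exists(os.path.join(JAIL, BINARY)):
        problems.append("missing binary: /" + BINARY)

    # Every library (and the dynamic linker) that the host copy of the binary
    # needs must also exist inside the jail. On a 64-bit system that includes
    # /lib64/ld-linux-x86-64.so.2, which the setup script forgot to copy.
    ldd = subprocess.Popen(["ldd", "/" + BINARY],
                           stdout=subprocess.PIPE).communicate()[0]
    for token in ldd.split():
        if token.startswith("/") and not os.path.exists(JAIL + token):
            problems.append("missing library: " + token)

    # The other gotcha from the bug report: sftp-server wants /dev/null.
    if not os.path.exists(os.path.join(JAIL, "dev/null")):
        problems.append("missing /dev/null")

    print("\n".join(problems) or "jail looks complete")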
[Updated 2009/07/17: Fixed link at request of owner.]
Tuesday, January 27, 2009
Rideback
After only one episode, Rideback is already at the top of my list for favourite anime of the Winter season. The transforming motorcycle is just too cool! It reminded me of the motorcycle mecha in Genesis Climber Mospeada, which was amalgamated into the Robotech series in North America. The big difference is that the mecha in Rideback does not enclose the rider, so the experience is closer to that of a regular motorcycle. There have been a few series recently that feature open-frame mecha, so it might be a trend. The series has an interesting main character, but it only has 12 episodes, so I expect the plot will develop quickly. The first episode drew attention to the political situation, so it must be relevant later in the story.
Monday, January 26, 2009
Tuning MPlayer
MPlayer has been my preferred video player for a long time because it consistently runs better on older hardware than the competition (xine, vlc). However, it looked like MPlayer had finally reached its limit with the combination of modern codecs like H.264, 720p HD video, and a lowly 1.7GHz P4. A faster computer seemed like the only answer.
Whenever MPlayer stumbles, it gives its infamous "Your system is too SLOW to play this!" warning, which offers several options that might improve playback. I have tried the options that seemed relevant, but somehow I missed one: "-lavdopts lowres=1:fast:skiploopfilter=all". The option results in a very noticeable loss in video quality, but 720p video does become watchable. I'm not sure if it is the better source, the newer codec, or just psychological, but the quality still seems better than an SD XviD .avi, even though the downgraded video is no longer HD. It must be psychological.
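For convenience, I may wrap this in a tiny launcher so I don't have to remember the option string. Just a sketch; the only assumption is that the mplayer binary is on the PATH.

    #!/usr/bin/env python
    # Minimal launcher that adds the "slow CPU" decoder options from above.
    import subprocess
    import sys

    LOWSPEC_OPTS = [
        "-lavdopts", "lowres=1:fast:skiploopfilter=all",
        "-framedrop",   # another option MPlayer's warning suggests
    ]

    def play(files, degrade=True):
        cmd = ["mplayer"]
        if degrade:
            cmd += LOWSPEC_OPTS
        return subprocess.call(cmd + list(files))

    if __name__ == "__main__":
        sys.exit(play(sys.argv[1:]))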
I'm not sure if the other media players allow this kind of control over the underlying decoders, but I'm glad MPlayer does. ;-)
Sunday, January 25, 2009
Battlestar Galactica
Battlestar Galactica has returned to air the final ten episodes of the series. I'm really looking forward to seeing how it ends. If episode 11 is anything to go by, there are many more surprises still to come.
The first season of BSG was the best sci-fi TV series I'd seen since Babylon 5. The second and third seasons were OK, but the series seemed to lose its way for a while. The fourth season, however, saw a return to the high-quality stories of the first season. I will miss BSG, but every story must have an ending.
Saturday, January 24, 2009
Miro Impressions
I found some time to play with Miro this week. Miro integrates the functionality of an RSS feed reader, a BitTorrent client, and a video player into one desktop application. Phew! That's a lot of stuff in one application, and it shows. Miro is quite large.
The problem for me is that it is all tied to the desktop. My desktop computer is not on all the time, so I would prefer to run the feed reader and BitTorrent client on a server, while the video player naturally belongs on the desktop. Miro is not designed to work this way so it does not fit my requirements. I was hoping Miro would be useful as a short term solution to manage my torrent downloads, but that is not possible.
However, I do like the concept and that is what interests me now. Miro is licensed under the GPL and is written in Python so perhaps there is even some code that I can reuse.
Friday, January 23, 2009
The Perfect Customer
I was back at the manufacturing plant to help my customer gather data from the new sensor. Yesterday she examined the code for the original algorithm, and the reason it fails with the new sensor is not obvious, so it is back to square one: analysing the input data.
She has a computer science background, so this approach works for her. I'm a programmer so I prefer analysing the output of the executing code to understand what it is doing wrong. Unfortunately, this system doesn't have a working unit test framework, so the only place I can run the programs is on the actual robot. That's not an appealing option.
I did have a unit test framework in the beginning, but as my customer tweaked the algorithm for each sensor, she did not update the tests, which is a fundamental requirement for unit testing. For some reason, she was simply not interested in maintaining the tests. I offered to update the tests, but that offer was rebuffed. I'm not sure why. After a while I gave up. Customers who have high technical skills create different challenges than customers who are completely non-technical.
So who is the perfect customer? The one who pays on time, of course. ;-)
Thursday, January 22, 2009
New Pressure
I was hoping to work on the LAMP server project today. Instead, I spent most of the day catching up on paperwork and running errands, so real work took a back seat. I'm feeling the pressure to get this project done since my customer confirmed today that he has a new client that he wants to host on the LAMP server. I'll need to pull the bung out and get on with it.
Wednesday, January 21, 2009
Debug Session
I was back at the manufacturing plant to test the new software on the robot. The testing uncovered a serious problem which caused almost all measurements on the gauge to fail, even ones which were not changed in this project.
The original design had a single A/D board attached to eight sensors. The driver for the A/D board stores the samples for all the channels in a single circular buffer. So with 8 channels enabled, the driver adds 8 samples in each cycle. A user space program reads the buffer and distributes the samples to a separate program for each channel.
The new system replaces one sensor with a supposedly "better" one but also has a requirement that it have a higher sampling rate than the other sensors. This particular A/D board has a limitation that all the channels must have the same sampling rate. The only solution is to add a second A/D board and, for simplicity, an identical board was used. I hacked up the existing analog input program to take arguments for base address, interrupt, sampling rate, etc., so that it could handle multiple boards. I also disabled channels that are not in use to reduce load on the system, and this is where I shot myself in the foot. The program still assumed all eight channels were in use when it distributed the samples, so every measurement would fail because it used data from the wrong sensor.
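To make the mistake concrete, here is a toy sketch (in Python, not the code that actually runs on the robot) of the distribution step. The fix is simply to index by the list of enabled channels instead of assuming all eight are present.

    # Toy sketch of the demultiplexing bug: the driver interleaves one sample
    # per *enabled* channel each cycle, so the distributor must use the
    # enabled-channel list, not a hard-coded count of eight.

    def distribute(buf, enabled_channels):
        """Split an interleaved sample buffer into one list per channel."""
        n = len(enabled_channels)
        per_channel = dict((ch, []) for ch in enabled_channels)
        for i, sample in enumerate(buf):
            # The broken version effectively did "buf[i] belongs to channel
            # i % 8", which is wrong as soon as fewer than 8 are enabled.
            per_channel[enabled_channels[i % n]].append(sample)
        return per_channel

    # With channels 0, 2 and 5 enabled, samples arrive as 0, 2, 5, 0, 2, 5...
    print(distribute([10, 20, 50, 11, 21, 51], [0, 2, 5]))
    # {0: [10, 11], 2: [20, 21], 5: [50, 51]}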
While debugging this problem I also discovered the system is now very close to the limit. When I dumped out the sample data from one measurement process, the analog input process lost sample data in the circular buffer. Before, I could dump the sample data from three processes safely.
There is still work to be done on the measurement process for the new sensor. Since it is using the original sampling rate, in theory the algorithms should just work, but it is never that easy.
Tuesday, January 20, 2009
CouchDB Again
It is unusual for the PyGTA presentation to be a repeat of a TLUG talk, since only 3 or 4 people attend both groups regularly. Even though it was almost the same presentation, I got more out of it this time, mostly due to fewer interruptions from the audience. TLUG has many experienced and knowledgeable people who can't resist throwing in their 2 cents.
For PyGTA, the presentation showed a simple Python library to access the CouchDB API, but I think some application-level examples would have been more useful. Anyway, I'm more interested in playing with CouchDB now, so I might create a database to manage my video downloads to see how far CouchDB takes me. The general agreement amongst the group was that there is a lot that can be done with this type of database, and the example of Lotus Notes certainly provides enough evidence for that assertion.
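As a first experiment, I'm picturing something like the sketch below with the couchdb-python library: one document per episode, with the download state kept in the document itself. The database name, document ids and fields are all made up here, and the library calls are from my reading of the couchdb-python docs, so treat it as a sketch.

    # One CouchDB document per episode; assumes CouchDB running on localhost.
    import couchdb

    server = couchdb.Server("http://localhost:5984/")
    try:
        db = server.create("video_downloads")
    except Exception:                      # database probably exists already
        db = server["video_downloads"]

    db["ga-rei-zero-12"] = {
        "series":  "Ga-Rei Zero",
        "episode": 12,
        "state":   "downloading",          # e.g. wanted -> downloading -> watched
        "torrent": "http://example.org/ga-rei-zero-12.torrent",
    }

    doc = db["ga-rei-zero-12"]
    print("%s episode %d is %s" % (doc["series"], doc["episode"], doc["state"]))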
Monday, January 19, 2009
Manual Labour
I helped to install the new hardware into the robot today. Since most of the plant is shut down, the noise level was tolerable, so we could get by without earplugs. This plant is one of the noisiest I've worked in, so it made a change not to have to make myself hoarse just to hold a conversation.
The air in this plant contains oily, metallic particles. It is not a health risk but it plays havoc with electronics, as I found out a few years ago. In 2003 I developed a simple system for testing if a barcode label had been applied to the part. The system runs on a normal PC which I assumed would be put into a cabinet for protection. After a year, they started having problems. The PC had been used, unprotected, in this environment all that time. The PC still worked, so after a thorough cleaning, it went back to the plant. Another year later, the PC died completely. I assume the metallic grease had finally overloaded something critical, because even a cleaning did not help. They finally got a clue and put the replacement PC into a cabinet.
But I digress. The work today went smoothly, but I ache like I've been doing manual labour. Bending over for long periods, crawling under the robot, lifting heavy parts: it all takes its toll. Even though I'm a programmer, developing industrial automation and control systems means you will get your hands dirty occasionally. I suspect that this is enough to keep most programmers away from this type of work. The software is not finished yet, so I'll be in the plant a lot this week. Programming in hostile, uncomfortable surroundings is always unpleasant. I've lost count of how many times I've had to work that way.
Sunday, January 18, 2009
Snow Joke
It is already looking like a repeat of last Winter. In the years that I've lived in Toronto, I don't recall the snow ever failing to melt at least a little between storms. Our snow removal procedure is just to push it aside, so when the temperature stays below freezing, the snow does not melt and we quickly run out of places to put it. It is even worse for homeowners, who are required to keep the snow on their property. Cities like Montreal, which gets much more snow than Toronto, have to pick up the snow and dump it in the river.
Two winters do not make a pattern, so it is unwise to draw any conclusions, but I hope this is an aberration and not a portent of the future. The forecast is for another 5cm on Wednesday and another 20cm next weekend, with temperatures below freezing throughout the week. Marvelous!
Saturday, January 17, 2009
Sleeping In
I should know better than to sleep in. Problem is I like it. :-)
I still managed to get some work done today, but in the end I could have put the time that I slept in to better use. I find that weekends are not very relaxing because there's just too much to do. I suspect it is worse for people with regular jobs. At least I can do other things during the week since I create my own schedule. That takes a lot of discipline, and sometimes I give in to the distractions. It's OK once in a while, but I have to be on guard that it doesn't become a problem.
Friday, January 16, 2009
Disorganised Customers
My on-site project has reached a point where my customer wants to do some tests on the robot because the plant will be shut down again next week. This will involve some fairly non-trivial changes to the hardware, which was never really intended to be field upgradable. When I explained the procedure for changing the hardware, I could tell that no one else had given it much thought. They were even slightly dismissive of the possible problems I described. Under those circumstances, the obvious course is for me to go to the plant, but when I left at 6pm, I still did not know the schedule for next week. I get the feeling it is going to be hectic.
Thursday, January 15, 2009
Shared Web Hosting
My experience to date setting up Linux and Apache web servers has been with dedicated servers. A server for a shared web hosting service presents some new problems I have not had to deal with before, with security between sites at the top of the list.
Many web applications expect to have write access to the file system. Since the applications run as the Apache user (www-data in Debian), a malicious application could write anywhere that www-data can write, and take over another site. I am researching best practices for setting up shared hosting environments, and so far the Apache modules suEXEC and suPHP seem to be the common approach. The idea is for Apache to change to the user that owns the files before accessing the data. There are some problems with this approach. One is that the applications can only run as CGI, which is not very efficient. Another is that the user must be a real user on the system, which opens up other security concerns.
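A quick audit script is the kind of thing I want to be able to run against such a server: list everything under the document roots that the Apache user could write to, which is everything one compromised site could scribble on. This is a rough sketch for a Debian-style layout; the /var/www path and the one-directory-per-site convention are assumptions.

    # List files under each site's docroot that www-data could write to.
    import os
    import pwd
    import grp
    import stat

    WWW_UID = pwd.getpwnam("www-data").pw_uid
    WWW_GID = grp.getgrnam("www-data").gr_gid
    DOCROOT_BASE = "/var/www"              # assumed: one subdirectory per site

    def writable_by_www_data(path):
        st = os.lstat(path)
        if stat.S_ISLNK(st.st_mode):       # ignore symlinks themselves
            return False
        if st.st_mode & stat.S_IWOTH:      # world-writable
            return True
        if st.st_gid == WWW_GID and st.st_mode & stat.S_IWGRP:
            return True
        return st.st_uid == WWW_UID and bool(st.st_mode & stat.S_IWUSR)

    for site in sorted(os.listdir(DOCROOT_BASE)):
        top = os.path.join(DOCROOT_BASE, site)
        for dirpath, dirnames, filenames in os.walk(top):
            for name in dirnames + filenames:
                full = os.path.join(dirpath, name)
                if writable_by_www_data(full):
                    print("%s: writable by www-data: %s" % (site, full))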
So far I haven't found a definitive guide for setting up a shared hosting server. I bet there is one out there, so I'll need to google some more, but I'm done for the day.
Wednesday, January 14, 2009
Apartment Hunting
For various reasons, I'm not happy in my current apartment, so I've been hunting for a while now. The vacancy situation in Toronto has always been bad, but the recession has made the problem much worse. People who are currently renting are less inclined to purchase a home when they have to worry about losing their jobs.
Since I work from home, my requirements create additional problems. I need more space than is usual for a single person, which puts me in competition with couples and families, who will get preferential treatment. I need a reasonable amount of peace and quiet, so I need to be careful about who my neighbours might be. That practically eliminates multi-unit buildings, which are the most common type of apartment building. I need good ADSL speeds, which limits the locations by distance from the Central Offices. That last one is very difficult since only my ISP can make any reasonable estimate of the speed.
Many vacancies do not get advertised so I need to get the word out and hope I get lucky. I'm also looking at other options but I'll leave that for another post.
Tuesday, January 13, 2009
Redhat Directions
The speaker at this month's TLUG meeting was a technical sales rep from Redhat. Normally I would cringe at that, but he has given talks at TLUG before and he is very entertaining. Except that the last time he presented at a meeting, he worked for Novell. How fortunes can change.
Redhat's target market is servers for big business, and most of the people at TLUG work in that environment, so they probably got more out of the presentation than I did. One area I found interesting was how Redhat maintains good relations with the developers of the open source projects on which they depend. Large customers sometimes ask for new features or changes, but Redhat will reject the customer's request if the changes do not fit into the future plans of the project's developers. It may not be good for customer relations, but it avoids the worse problem of Redhat having to maintain forks of various projects just to keep a customer happy.
It was a little surprising that Redhat, rather than IBM, is the largest corporate contributor of code to the Linux kernel and other free software projects, but it makes sense: IBM has many proprietary products under its control, whereas Redhat relies entirely on free software for its business. It was these little pieces of information that gave me a new understanding of where Redhat fits into the overall open source world.
Monday, January 12, 2009
Browser Bookmarks
As I mentioned in a previous post, I use browser bookmarks to keep track of anime torrent links. Bookmarks were just not meant to be used as a database, so it has become a mess. As the number of links has increased over the past 15 months, I've shuffled things around a few times to keep the bookmark menus from getting too long, but I'm running out of ideas.
I'm seriously thinking about trying Miro just to see how it handles the problem. It is available as a Debian package, and I can see from the dependencies that it uses an SQLite database. The large number of dependencies also tells me that it is a big, complex application, so unless it is the best thing since sliced bread, Miro would only be a stopgap while I figure out a homegrown solution.
Sunday, January 11, 2009
Bazaar API
I've been poking around in the Bazaar API to learn how I could create repositories from my TomsProjectUtility program. The easiest way to do this would be to simply run the bzr utility in a sub-process, but what fun would that be? Since my program and Bazaar are both written in Python, I felt it would be more interesting to use the Bazaar API directly. Running a Bazaar command is easy. Each command is a class with a run() method which takes the same arguments that you pass on the command line.
Writing a unit test for my application code proved more challenging. My first attempt used the equivalent of a "bzr check" command, but it does not provide enough detail for a complete test. I had to make sure that my application creates a Bazaar shared repository as well. I had to dig a little deeper into how "bzr check" works to figure out how to test for the shared repository, but it was worth it. I now have everything that my utility needs to create the Bazaar repository automatically.
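For reference, this is roughly the shape of the code I have in mind. The class and argument names are my assumptions from browsing bzrlib, so treat it as a sketch rather than verified API.

    # Sketch of the two pieces TomsProjectUtility needs; class and argument
    # names below are assumptions from my reading of bzrlib.
    from bzrlib.builtins import cmd_init_repository   # the "bzr init-repo" command
    from bzrlib.bzrdir import BzrDir

    def create_shared_repo(path):
        # Every bzr command is a class whose run() takes the command-line
        # arguments, so this should be equivalent to "bzr init-repo <path>".
        cmd_init_repository().run(location=path)

    def is_shared_repo(path):
        # What the unit test really needs to assert: not just that the
        # location is valid (roughly what "bzr check" covers), but that the
        # repository reports itself as shared.
        return BzrDir.open(path).open_repository().is_shared()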
Unfortunately, I ran out of time today so actual implementation will have to wait a little longer.
Saturday, January 10, 2009
Anime Fan Subs
The quality of anime fan-subs is generally very high. It reminds me of free software in many ways. When people do something as a hobby, the work becomes special to them, and it shows in the results.
As with anything that is free, there are drawbacks. You can't expect episodes to be released regularly. People are doing this in their spare time, so there are many reasons why the subbed episodes will be delayed. When the delay turns into weeks, it is usually an indication of a bigger problem. Some groups take on too many new shows each season, and at some point they will fall behind on some of the series they chose to translate. Once a group falls behind by more than a month, the possibility increases that they will abandon the series altogether. I find this to be the most frustrating drawback because, when I switch to another group, I also must decide whether to just continue or to start the series again.
Although it is annoying, I actually prefer to watch the series from the beginning with the new group. There can be enough difference in translation style that continuing the series with another group is disconcerting at best, and outright confusing at worst. The best way to avoid these problems is to choose the more reliable groups from the beginning, but that is not always possible.
Friday, January 9, 2009
Real Drive
I finally finished watching an anime series called Real Drive. I enjoyed the series, but I'm still mulling over some of the messages I got out of the story. At a simple level, it is a science fiction story about a next-generation network called the Meta Real (Metal), where most users are permanently connected, anywhere and everywhere, without any external devices. However, the hallmark of good science fiction is that it is not about the technology but about how it affects people and the environment in which they live. This series most certainly delivers on that front.
The story got me thinking about 3D virtual environments and how we've become so enamoured with them. The raw Metal is perceived as an ocean and users "dive" into it. Later we discover that this correlation is not accidental, and that the real ocean has characteristics which are only revealed because we created this technology. So I wonder, as we create ever more detailed and complex virtual environments, what might we discover about ourselves and the world around us?
Thursday, January 8, 2009
Toilet Paper
A trend has developed for items to become smaller but cost the same. I wonder what the psychology behind this is.
For example, a 24-pack of No Name brand toilet paper has been shrinking steadily for about a year now. It used to be around 250 sheets per roll. When I last bought a package less than a month ago, it had 176 sheets per roll. Now it has 140 sheets per roll. That's a total of 3360 sheets for $4.97. The President's Choice "double roll" 12-pack has 352 sheets per roll, for a total of 4224 sheets at $5.76. On a cost per roll basis, the No Name is cheaper, but per sheet the PC pack actually works out slightly cheaper (about 0.136 cents per sheet versus 0.148). Since toilet paper is used by the sheet, it would seem that the PC 12-pack is actually going to last a little longer.
I must be missing something here and thinking about it is giving me a headache.
Wednesday, January 7, 2009
Tiring Days
Working on-site makes the day seem longer. All the normal household chores that could overlap with the work must be compressed into the beginning and end of the day. I only finished the chores at 10pm today, so there was not much time for relaxing. I watched two anime series, Kuroshitsuji and Shikabane Hime: Aka, but now it is already time for bed. That's not much down time.
Both shows are really good, by the way. I wasn't too sure about Kuroshitsuji at first but it has grown on me. I liked Shikabane Hime: Aka from the start and it keeps getting better.
Tuesday, January 6, 2009
Throttling Counter Measures
I've been using multilink PPP to circumvent Bell's throttling since around May 2008. The purpose of multilink PPP is to combine PPP connections into a logical bundle that aggregates the bandwidth of all the links. The bundling inserts a 6-byte header into the packet data, which is enough to fool the deep packet inspection that Bell uses to throttle connections. The best part is that multilink PPP works on a single link. It is well known that it would be easy for Bell to counteract multilink PPP on a single connection.
For the past few nights, BitTorrent looked like it was being throttled again. Oh, no! They finally figured it out, I thought. I checked the Teksavvy.com forum and others had noticed the problem, but there was no need to worry. The reason was much less sinister: Teksavvy was having problems on a server and had turned off multilink PPP on that server. Sure enough, I was connected to that server. Taking a chance that I would find a different server, I reconnected. It worked, and BitTorrent speeds returned to normal.
This highlights a couple of things. First, I'm now completely dependent on getting good speeds from BitTorrent; if Bell decides to counteract multilink PPP, I would have to consider getting a second line. Second, it shows how transparent Teksavvy is: very few ISPs would even consider discussing their network problems in a public forum.
Monday, January 5, 2009
Shoulder To The Wheel...
...Nose to the grindstone. Sounds painful, doesn't it? Sometimes work is just work, no matter how much you enjoy it.
I was back at the on-site contract today. There are two projects that need to be completed on the same code base and the customer wants both to proceed in parallel as much as possible. Obviously, I needed a branch for one project. I had to look up the CVS branch command as it has been a while since I last used it. CVS is arcane. It reminded me why I switched first to Subversion and now to Bazaar. Now, how do you merge in CVS again...
Sunday, January 4, 2009
GTalk And Protocol Transports
My preferred IM system is Jabber and the open XMPP standard, so I was very happy when Google chose XMPP as the platform for the GTalk service. But what about all the other IM services? There are several solutions. The simplest is to run separate clients for each service. This works best on Windows, where you can use the official clients, but on Linux official clients are not always available. On Linux, there are a few multi-protocol clients, and many people prefer that solution. I prefer to use an XMPP-only client with a protocol transport on a remote server.
XMPP protocol transports are proxies which translate between XMPP and the proprietary protocol. It works well most of the time, but there are two big problems: you have to find a reliable transport server, and your Jabber service has to talk to that server properly. For a long time now, xmpp.net2max.com in Australia and GTalk have worked perfectly for me. Until a couple of days ago, that is.
The first problem is that net2max.com appears to have closed the server to non-members. That's disappointing since it was very reliable, but the choice is understandable. When I tried other transport servers, I found that GTalk no longer talks to any of them properly. When I connect to the protocol server via my account at jabber.org, everything works perfectly, so the failure is definitely GTalk. It's a free service, so I can't really complain, but it is annoying. Bug reports have been filed, so we just have to wait for a fix.
Saturday, January 3, 2009
Managing Torrent Downloads
My manual process for managing my TV torrents is bugging me. When only a few shows were involved, it was not a problem, but as the list has grown, the system has become too cumbersome.
The list of shows I am following is stored in a text file. The links to the torrent web sites are bookmarked in Firefox. Each night I open the text file to see what needs to be downloaded, and visit the corresponding web site to check if the next episode is available for each show. I get the .torrent file and queue it up for processing that night. I update the text file to indicate which episodes are being downloaded. When the downloads are complete, I update the text file again to show that an episode is available. After I watch an episode, I change the state of the show in the text file. I've deliberately left out the details since I'm just trying to make a point.
The entire process is sufficiently complex that I have already discarded any thought of trying to do it with a simple script. I've started looking at the smaller processes, and seeing which of those I can automate. My personal projects are becoming a lot more interesting.
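The part that looks easiest to automate first is the state tracking itself. Something like the sketch below could replace the hand-edited text file; the state names and the JSON format are my own invention for the sketch, not what I actually use today.

    # Tiny state tracker to replace the hand-edited text file.
    import json

    STATES = ["wanted", "downloading", "downloaded", "watched"]

    def load(path="shows.json"):
        return json.load(open(path))

    def save(shows, path="shows.json"):
        json.dump(shows, open(path, "w"), indent=2)

    def advance(shows, title):
        """Move a show to the next state, e.g. after queueing its .torrent."""
        state = shows[title]["state"]
        shows[title]["state"] = STATES[min(STATES.index(state) + 1,
                                           len(STATES) - 1)]

    # Example:
    #   shows = {"Rideback": {"episode": 1, "state": "wanted"}}
    #   advance(shows, "Rideback")   # state becomes "downloading"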
Friday, January 2, 2009
Gift Cards
Like most people, I get a lot of gift cards and, like most people, I tend to forget about them. I'm going to make sure that I don't do that any more. I have three gas company gift cards for a total of $75, and I got one of those cards for Christmas 2007. I wish I had remembered to use that when gas prices were so high last year. I also have a partially used card for a cinema chain and one for a music store, and an unused one for a book store. This Christmas I got another movie card. Fortunately, Ontario banned expiry dates on gift cards, so all the old cards are still valid, even if I've had them for a few years.
Thursday, January 1, 2009
BZFlag
On New Year's day I set up a BZFlag server and invited a few friends to have some fun. Unfortunately, we always seemed to miss each other, so it was a bust. I was hoping that random chance would be enough, but it was not. Next time I'll have to set a time for the game to begin. I should also have told them about the "-solo n" argument on the client, which creates n robot players so you have something to shoot at if no one else is around. This would have kept players interested and increased the chances of people connecting at the same time.