<b>Rants, Ramblings, and Ideas...</b> -- The blog of Derick Eisenhardt (aka ZephyrXero)<br />
<br />
<b>RYGCBM Color</b> (2017-01-21)<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://i.imgur.com/iE4XU1J.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="243" src="https://i.imgur.com/iE4XU1J.png" width="320" /></a></div>
<br />
<br />
<br />
<br />
<div style="text-align: center;">
<b>RYGCBM</b> -- Red, Yellow, Green, Cyan, Blue, and Magenta </div>
<br />
I hope one day we will see monitors, TVs, projectors, and the like that display color through six hue elements rather than just the traditional three, maybe even with a seventh for pure, colorless white. Red, Green, and Blue alone may get us to a point where things are "good enough" for most people, but because not all humans see with exactly the same trichromacy, I believe we need more colors in our palette if we really want to cover the full gamut of human vision, not just the average. As a partially color-blind person, I find many colors harder to distinguish on a display than in real life. It is also well known that RGB displays have trouble representing their secondary colors of yellow, magenta, and especially cyan. With six, if not seven, colors as our base, we know those at least will be accurately represented, and the new set of secondary colors between them, orange for example, can finally be displayed with a clarity not possible on modern equipment.<br />
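To make the idea concrete, here is a toy sketch of how an ordinary RGB signal could be decomposed into drive levels for a hypothetical seven-element RYGCBMW pixel. This is purely my own illustration (the function name and the subtraction scheme are invented, and no real display pipeline works this naively): the achromatic part goes to the white element, and the overlap between each pair of neighboring primaries goes to the dedicated secondary element, so colors like pure yellow or orange no longer have to be faked by mixing distant primaries.

```python
def rgb_to_rygcbmw(r, g, b):
    """Decompose an RGB color (components in 0.0-1.0) into seven
    hypothetical drive levels: (R, Y, G, C, B, M, W).

    The part common to all three channels becomes the white level;
    the remaining overlap of each neighboring pair of primaries is
    handed to the dedicated secondary (Y, C, or M) element.
    """
    w = min(r, g, b)               # achromatic part -> white element
    r, g, b = r - w, g - w, b - w  # chromatic remainder
    y = min(r, g)                  # yellow  = overlap of red and green
    c = min(g, b)                  # cyan    = overlap of green and blue
    m = min(b, r)                  # magenta = overlap of blue and red
    # After removing the white part at least one primary is zero, so
    # at most one secondary is nonzero and the subtractions stay >= 0.
    return (r - y - m, y, g - y - c, c, b - c - m, m, w)

# Pure yellow drives only the yellow element instead of red + green:
print(rgb_to_rygcbmw(1.0, 1.0, 0.0))  # (0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0)
# Orange becomes equal parts red and yellow:
print(rgb_to_rygcbmw(1.0, 0.5, 0.0))  # (0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0)
```

A real six- or seven-primary display would of course need measured primaries and proper gamut mapping; the point is only that the colors called out above (yellow, cyan, orange) land on dedicated or adjacent elements instead of being synthesized from far-apart ones.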
<br />
Sharp experimented with a four-color TV in their Quattron series, but this gave disproportionate weight to yellow and was unpleasant to some viewers. On the capture side, Google has made a few Nexus phones with a fourth, white receptor for truer greyscales and gamma representation. As HDR10 and 12-bit color, along with the movie industry's new ACES color space, expand the range of colors we can digitally represent, I don't know if simple RGB will be enough. Hopefully another company will come along brave enough to experiment against the norm.<br />
<br />
<b>Please feel free to use this idea.</b><br />
<br />
<span style="font-size: x-small;">PS: [ <a href="http://zephyrxero.deviantart.com/art/Integral-3DTV-concept-431449606">Here's an additional concept of mine for making a 3D RYGCBMW display</a> ] </span><br />
<br />
<b>The Next Generation of Consoles is Closer Than You Think</b> (2009-09-20)<br />
<br />
I spend a good bit of my time thinking about the future, and game consoles have been a big topic of interest for me over the years. I'd like to make a few predictions about the next generation of systems and what will happen in general, hardware-wise, in the gaming industry.<br /><br />Right now, depending on how you look at it, this generation has two very successful systems: the Wii and the Xbox 360. The reason there is no clear winner is that the two focus on fairly different markets. On the one hand you have the 360 and the PS3, which try to be impressive, powerful, HD-capable hardcore gaming machines. Nintendo, however, decided not to compete with them at all this generation, selling what is basically an over-clocked GameCube and focusing their attention on a new controller paradigm with the "Wii-mote." Nintendo's focus on casual gaming, and on making their system more accessible to those who may have never owned a console before, really helped the Wii flourish. However, while the system has technically done very well, selling about as many units as the 360 and PS3 combined, it's not selling many games. And what good is a game system if no one really buys any games for it? Another problem is that the system has not aged well. Because the Wii was so similar to the GameCube (in both design and power), developers had already maxed out its potential within its first year of release. More important, though, is its lack of HD capabilities.
When the Wii launched, not many people had HDTVs yet, but today they have become commonplace in the US, and it's getting rare to find someone whose TV doesn't push at least 720p. This makes the Wii, with its max of 480p (anamorphic), look awfully "jaggy" and/or blurry to someone who's grown accustomed to a higher resolution.<br /><br />Sony went the opposite route, trying to make the PS3 the premium game console aimed almost strictly at hardcore gamers. This probably would have worked if the system were even noticeably more powerful than the 360, which came out a year before it. Hardcore gamers always want the best of the best (just look at how much money PC gamers spend every year on hardware), but even they have a tough time telling the difference between the graphics, physics, and AI on the two systems. Tack on the fact that the PS3 has always been the most expensive, mainly thanks to the inclusion of Blu-ray, and stayed above $300 until just recently (which many have claimed is the most you can possibly sell a successful console for). There are many other things I could say about the PS3, but they're beyond the scope of what I'm discussing today.<br /><br />I would call the 360 the most successful console, even though the Wii has sold more systems, because it sells more actual games. First off, Microsoft beat the other two to the party by a year and established a first-mover advantage. More importantly, though, they focus on the entire gaming market. Nintendo may be content with the casuals, and Sony tries really hard to win over the hardcore, but Microsoft, being who they are, want the entire pie to themselves. Luckily, unlike in the PC realm, this benefits everyone, as it causes them to compete more aggressively. Almost every 3rd-party developer I've read about says that the 360 has the easiest development environment of all the systems.
Nintendo sits in the middle here, but they've done what they've always done: focus on selling their own games, and they seemingly couldn't care less whether any 3rd parties are really successful or not. Sony, from what I understand, has the most difficult environment of them all, which was also true last generation, but they could get away with it then because the PS2 had a huge market-share advantage that they don't have this go-round. The other big advantage the Xbox has is the Xbox Live service, which some would say is the number one thing it has going for it. Sony's PlayStation Network still seems to be playing catch-up, and Nintendo's online services are all but non-existent.<br /><br />Anyway, enough about the past/present...what will these guys be doing in the future? Next Christmas, Microsoft and Sony will both be releasing their own motion-control systems to finally compete directly with the Wii. Microsoft has opted to go completely controller-free with its Project Natal (codename), using a 3D camera system and motion-recognition software, while Sony seems to be aiming for some sort of middle ground with a very complex remote/pointer combined with the EyeToy. This has grave implications for Nintendo. And while I'm not so sure about Sony, Microsoft has been trying very hard to cater to Nintendo's userbase with the addition of more casual/arcade games and even an Avatar system that is ridiculously similar to Nintendo's Mii system.<br /><br />With the 360 and PS3 directly competing with the Wii now, Nintendo will be forced to respond. With its competition capable of much more advanced graphics, not to mention 1080p resolution, the appeal of the Wii will become almost non-existent unless you're wanting to play a first-party game. While Sony announced a lofty 10-year life cycle for the PS3, and similar expectations are unofficially held for the 360, Nintendo never made such a claim.
They're practically the originators of the 5-year life cycle, so if they follow tradition we should already be expecting a new system in 2011. This will also give them just enough time to see what their competitors' final motion systems look like, in case they want to borrow something for their next system. I would be absolutely shocked if we do not see a new Nintendo system in time for the 2011 Christmas season. I wouldn't be that surprised if it even showed up next year as a supposed refresh called "WiiHD" or something of that sort: something that merely builds on what the Wii has done but adds 1080p (at the very least 720p) output, a bit more RAM, and maybe even a dual-core version of the current processor. They might even market it as if it weren't really a new system at all, but simply an upgrade. Whatever they do, it will have to compete with the 360 and PS3 more directly in terms of graphics. I can only hope they do it right and give us something that actually surpasses them, most importantly in the realm of physics, which matters a lot more when you are interacting with a game physically rather than with simple button presses. It will also have to have a hard drive built in; there really is no excuse anymore. But at the end of the day, even if the next Nintendo system is on par with or even a little better than the 360/PS3, it won't matter for long once the other guys release their next systems.<br /><br />Next up, I'm going to talk about the PS4 and the next-gen Xbox at the same time, as they're already so similar. Since some people have trouble differentiating between the graphics of a PS2 and a PS3, or an Xbox versus an Xbox 360, both companies are really going to have to try hard to give the average gamer a reason to upgrade. First off, games need to run at 60 fps rather than the 30 we've grown accustomed to as good enough.
Next up, 1080p is here, and gamers expect not just 1920x1080 resolution but good anti-aliasing on top of it. Just as Microsoft required a minimum of 4xAA at 720p with the 360, they should require 8xAA at 1080p on their next system. More dynamic animation systems also need to become the norm. A few games played around with the first generation of this via NaturalMotion's Endorphin, but now it's time to take it to the next level, which faster processors and more RAM will certainly help with. Not only do canned animations need to be all but done away with, but we need actual muscular/skeletal simulation systems like those used in pre-rendered CG movies. Obviously much more advanced physics and particle effects will help too. The next thing that needs to become the norm is High Dynamic Range (HDR) rendering. 24-bit color scenes with 32-bit shading and textures (sometimes not even that) might fly for now, but the next gen needs to take advantage of 64-bit rendering techniques and the 30- and 36-bit color HDTVs many people now have sitting in their living rooms. Another big thing will be real 3DTV. As Sony's already talking about doing it with the PS3, the next gen will have to be able to support your new shutter glasses, vertical stereo prism LCDs, or maybe even video goggles. <br /><br />Now, all these graphical details are nice, and in my opinion should be pretty much mandatory for any new system that launches as of you reading this, but what will really take things to the next level is Real-Time Ray Tracing (RTRT). People have been experimenting with it for years, but now it's almost within our grasp. If you have a cluster of PCs, you can already do it. I've seen one demo that uses three PS3s, so a setup of three Cell processors (21 SPEs between them) should be capable of it...but triple-core? Isn't that what the 360 already has? And many PCs are now coming with quad-core CPUs (6- and 8-core as soon as next year)...so obviously they'll have to outdo that, right?
I also expect both Sony and Microsoft to release new systems no earlier than 2012 and no later than 2014.<br /><br />I have a big hunch Microsoft will return to the arms of Intel with its next system. It tried to make its own custom chip with the 360, but from what I've read that hasn't saved them nearly as much money as they had hoped, since commodity chips' costs have dropped much faster. I expect the next Xbox to have at least a quad-core, if not an octo-core, Intel "i9" CPU. However, I could be wrong; like I said, I have no actual information pointing me to this. They could very well license a new 6-core version of the IBM chip they have now, plus some of the Cell's SPEs (vector co-processors) tacked on. Anyway, assuming they go the Intel route, as my gut tells me they will, I could also easily see them using the upcoming Larrabee GPU. Intel's still-unreleased Larrabee can run traditional rasterization (Direct3D/OpenGL), but it really shines when used for RTRT. I also expect Microsoft and Sony to finally come to some sort of agreement so that the Xbox will have a Blu-ray drive too.<br /><br />Sony will almost certainly go with a new revision of the Cell processor; they spent so much time and money helping develop it that I don't think it will be easy for them to abandon it, especially after they failed to really make any money off it this generation. They'll certainly want a more powerful setup, but luckily the Cell has always been designed to scale up to larger multi-core configurations. I'd expect somewhere between a 4- and 8-core configuration here as well. And while the PS3's Cell had 7 SPEs, I couldn't see this one having any fewer than 16, probably more like 32, maybe even 64, as Sony will really want to do a better job of one-upping Microsoft on having the more powerful system. Then they'll probably put another Nvidia GPU in the system, maybe even a dual-core one there too.
On the flip side, I did see a rumor recently that Sony's courting Intel to make the Larrabee GPU exclusive to the PS4, so maybe I've got the two backwards? I seriously doubt, though, that Sony would pair an IBM CPU with an Intel GPU; if someone goes Larrabee, they'll most likely be using an Intel processor too.<br /><br />What I'd really love to see here is Nintendo surprising us out of left field by using AMD's next-generation CPU that has GPU/vector-processor cores built into one chip...but I'd call that a long shot, as I seriously doubt it will be ready within the timeframe I expect the next Nintendo system by...but they have surprised us before, I guess. I completely expect them to stick with the Wii-mote concept, but maybe they'll come up with something even cooler, like real VR gloves (not that PowerGlove bullsh*t)?<br /><br />One last thing...RAM. Both the Xbox 360 and PS3 have a total of 512MB of RAM shared between the system and graphics. The 360 has an additional 10MB embedded in its GPU (think of it sort of like the CPU's L2 & L3 cache). Then the poor Wii has only 91MB of RAM. As the average PC today comes with 3 or 4 GB of RAM, and most video cards have a gig of their own on top of that, I'd expect both the next Xbox and the next PlayStation to have about 4GB total (shared), and I really hope Nintendo's next system has at least 1GB, if not 4 like the others.<br /><br />Regardless of what happens, the future is, as always, exciting. It's going to be all about even higher-quality graphics that make you glad to own an HDTV; multi-core processors that can do much more realistic physics and better AI; and more focus on internet connectivity, whether in multiplayer or direct downloads...and if I'm right, it'll all start to show up pretty damn soon. I expect to see a new Wii in 2011, followed by a new Xbox in 2012 and then a new PlayStation in 2013.
Personally, I'd like to see everyone work together on one <a href="http://opengameconsole.blogspot.com">unified standard</a>, but for all we know cloud gaming may swoop in and eliminate the need for much of a console at all within the next 5-10 years.<br />
<br />
<b>Google's ChromeOS: O3D Integration Changes Everything...</b> (2009-07-24)<br />
<br />
Ok, so like many, I've been extremely skeptical ever since it was first announced that Google was planning to build their own operating system centered primarily around Chrome and the web. Obviously this would be fine for a netbook, where a browser is all you really need, but when they claimed people would use it on their desktops too, that's when I was confused. People expect a desktop (or full-fledged laptop, for that matter) to do a lot more than just browse the web, especially when it comes to multimedia and gaming.<br /><br />Well, yesterday it was <a href="http://news.cnet.com/8301-17939_109-10293207-2.html">announced</a> that the Chromium developers plan to make <a href="http://code.google.com/apis/o3d/">O3D</a> a built-in standard component in a future release, probably Chrome 3 I'd be willing to bet, and probably the same version that will ship in the first edition of the Chrome OS. That's when it finally clicked: this could change everything. As I researched the topic today, I also stumbled across <a href="http://news.cnet.com/8301-17939_109-10227150-2.html">Google's Native Client</a> (NaCl for short, so should I just call it Salt from here on out?), which looks to offer faster performance than a JavaScript engine will ever be able to accomplish, but, more importantly, could add the ability to use other languages like C/C++, or maybe even Python (my personal fav), right in your web apps.<br /><br />And now it's all coming together.
A year ago we all asked: why is Google bothering to release their own browser? Why not just work with Firefox? Then a couple of weeks ago we once again wondered: why is Google bothering to create its own OS? Couldn't they just work with Ubuntu? When they announced O3D, I thought to myself, "Oh, neat...that could be really cool one day." But now I've finally gotten a glimpse of the big picture. Combine all of this together, and they just might be able to pull off things we never thought could happen, at least not any time remotely soon.<br /><br />With Chrome gaining this new Native Client ability in combination with O3D (not to forget HTML5 audio/video support), the next generation of web apps might actually be able to compete directly against native desktop apps...and this makes the concept of a Chrome OS suddenly much more feasible. If you could play Xbox 360- and PS3-quality PC games right in your browser, if you could have silky-smooth GL-powered interfaces for web apps...it all gives things much greater potential than what the ol' Web 2.0 & AJAX revolution of a couple years back has provided us with so far.<br /><br />Not only this, but since all four of these projects are open source, it won't be limited to just Google. Unlike Flash & Silverlight, these technologies will be able to be modified to work really well across numerous operating systems and hardware architectures, and be used by other developers and products beyond just Google. Java's new JavaFX platform was looking potentially promising at one point, but as there has yet to be any code released to the public (as far as I'm aware), and with all the uncertainty surrounding Sun's acquisition by Oracle, that may never come to fruition.
Mozilla is also working on some similar technologies, but I'd be willing to wager that in a couple of years Mozilla and Google will take these new 3D and local/native abilities to the W3C for inclusion in HTML5+1 and find a common ground.<br /><br />One more interesting concept to consider is the kind of services all these upcoming <a href="http://lmgtfy.com/?q=cloud+game+console">cloud gaming services</a> are planning to offer. The more you think about it, the more feasible the concept of having only thin-client cloud computers for everything becomes. All we need now is for our pesky ISPs to pick up the pace with more bandwidth and, more importantly, much less latency.<br /><br />This is all very exciting to think about, but we have to remember to take it with a grain of <a href="http://en.wikipedia.org/wiki/Sodium_chloride">NaCl</a>, as it'll still take quite a few years for these new things to develop and take off with the web-development community. Also don't forget that Google will probably face some pretty stiff competition from Microsoft and Apple, who obviously won't easily relinquish their current power over general computing. However, the once-dreamy picture of a cloud-filled future seems less a question of if and more a question of when. It might just be sooner than we all thought ;)<br />
<br />
<b>Firefox 3.5 Benchmarking in Linux</b> (2009-07-01)<br />
<br />
So I know there are tons of benchmarks already out there for the just-released Firefox 3.5, but most of them are Windows-focused.
So if, like me, you run a Linux OS, there's perhaps still a little more to learn.<br /><br />I know most benchmarkers prefer to do everything very cleanly, with nothing but the browser running and maybe even a fresh reboot for each test...but I'm doing this more real-world, with other programs running in the background and no reboots. Just for reference, though, none of the other running programs were changed during the tests, to maintain some semblance of scientific objectivity.<br /><br />I still run Ubuntu 8.04 (LTS), so the only official Firefox available to me through Canonical's repositories is FF 3.0.11. In theory this one should be optimized specifically for use with Ubuntu, but as others have pointed out in the last few months, FF seems to run a bit slower on Linux than on Windows, of all things. Most believe this to be a matter of optimization, so I am also trying out SwiftFox for the first time today. The version I'm testing is 3.5rc3, optimized for my Pentium D (Prescott, 32-bit) CPU, so supposedly it will be faster than both the one Ubuntu ships and the vanilla binary from Mozilla's website, which is where I got the copy of 3.5 I'll be using for this test. On top of those three, I'm also going to run the same tests on Google's new Linux alpha version of Chrome (ver. 3.0.190.2).<br /><br />So first off, just for curiosity's sake, let's see how they all do in the <span style="font-weight:bold;">ACID3</span> test:<br /><br />Firefox 3.0.11: 72%<br />Firefox 3.5.0: 93%<br />SwiftFox 3.5.rc3: 93%<br />Chrome 3.0.190.2: 99.9%<br /><br />As expected, Firefox 3.5 and SwiftFox get the same score, as they're technically the same version of the rendering engine.
Chrome actually says 100/100, but then it says "linktest failed" below that and there is a big X in the top-right corner, so I've marked it down to 99.9%, as I'm not sure what score that really constitutes.<br /><br />Next up, we'll run Google's own <span style="font-weight:bold;">V8 Benchmark</span>, and the scores are rather surprising.<br /><br />FF 3.0: 119<br />FF 3.5: 194<br />SF 3.5: 230<br />Chrome: 2492<br /><br />First off, the difference between Chrome's score and the Mozilla-based browsers is almost ridiculous. As Google created this benchmark themselves, it almost makes you wonder whether they specifically included tests they knew V8 would handle better than TraceMonkey, so I'll take that portion with a grain of salt. Also, these "points" have no real intrinsic definition as to how they are calculated; more is better, obviously. As for the other three, FF 3.5 doesn't even score twice as high as 3.0 did. SwiftFox, however, finally proves that it is indeed better optimized than the other two by scoring almost 40 points more than the vanilla build.<br /><br />Last, we'll go with the tried-and-true <span style="font-weight:bold;">SunSpider</span> benchmark provided by the WebKit team.<br /><br />FF 3.0.11: 5,583ms<br />FF 3.5: 2,421ms<br />SF 3.5: 2,111ms<br />Chrome: 986ms<br /><br />Here FF 3.5 shows it is clearly much faster than 3.0, completing the series of tests in less than half the time it took its older brother. Once again, SwiftFox shows it is certainly faster than its not-so-well-optimized cousins. And here Chrome really shines, completing the tests in under a second! That's over twice as fast as SwiftFox, so maybe that V8 Benchmark score wasn't as artificially bloated as I thought?<br /><br />There are obviously many other browsers I could have included here, but decided not to bother with.
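As a quick sanity check on the "less than half the time" and "over twice as fast" claims, the ratios can be computed directly from the SunSpider times listed:

```python
# SunSpider total times from the runs above, in milliseconds (lower is better).
sunspider_ms = {
    "FF 3.0.11": 5583,
    "FF 3.5": 2421,
    "SF 3.5": 2111,
    "Chrome": 986,
}

def times_faster(slower, faster):
    """How many times faster `faster` finished the suite than `slower`."""
    return sunspider_ms[slower] / sunspider_ms[faster]

print(round(times_faster("FF 3.0.11", "FF 3.5"), 2))  # 2.31, i.e. well under half the time
print(round(times_faster("SF 3.5", "Chrome"), 2))     # 2.14, i.e. over twice as fast
```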
When Konqueror finally switches to WebKit and SquirrelFish it may be worth writing about, but for now it's last decade's browser. Projects like Midori are also much too new to bother with at the moment. Neither Safari nor IE has a native port for Linux, so they're obviously right out. And last there's Opera... This is a philosophical question for most, but I refuse to use a proprietary browser when there are open-source options just as good if not better, so I personally couldn't care less about Opera; that, and their JavaScript performance is still quite far behind the likes of TraceMonkey, SquirrelFish, and V8 from what I understand.<br /><br />To conclude, Chrome is still extremely alpha at the moment, with no plugin support, so it's not really viable for daily use. With no Flash and no HTML5 video, half the web quickly becomes unusable, so we'll see what the situation looks like when they finally make their first stable release for Linux-based OSes. For now, I know I'll be using <a href="http://getswiftfox.com/">SwiftFox</a> from now on ;)<br />
<br />
<b>Death & Taxes</b> (2009-04-29)<br />
<br />
The US tax system is really screwed up. I technically work for an accountant (the computer biz I work for is a side company of his), but to be upfront, I am quite far from an expert in such things. Still...<br /><br />A business gets to write off any kind of operating expense when doing its taxes; this can be anything from paying employees and the power bill to advertising and "business meetings" at fancy restaurants. However, you don't get such liberties with your personal filing (as far as I'm aware).
In all fairness, you would think I should be able to write off all my rent, food, power bills, gas, clothing, etc., as they are all operating expenses of my daily life.<br /><br />I'm curious whether I could get away with creating a small LLC called Derick's Life or something, and have it registered as the owner/operator of all my personal expenses. I could even go so far as to contract myself out, so that I am employed by Derick's Life, LLC, and my real employer pays a contractor's fee equivalent to my current pay rate. Then DL corp could officially pay me only what's left of my paychecks after all my bills and living expenses are paid for and budgeted out. At the end of the year, Derick's Life, LLC could list my rent, doctor's bills, phone bills, car note, etc. as expenses (even the sub-paycheck I receive from the LLC, a.k.a. my spending money/disposable income), against a gross income of whatever my real employer paid for me to work there.<br /><br />In theory, the company should only have to pay a very small pittance in taxes compared to what I might have to. Of course this rant is all silliness in the end, as I'm actually getting a refund this year, but still...if I actually made average middle-class wages (which are double what my wife and I made combined in 2008), I wonder whether one could be better off in such a scenario.
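Sketching the arithmetic with made-up round numbers and a flat stand-in rate (real rules like self-employment tax, payroll taxes, and the standard deduction would change this picture entirely, so treat every figure here as hypothetical):

```python
# All figures hypothetical; a flat 20% stands in for an actual tax schedule.
gross_pay = 40_000        # what the employer pays, either way
living_expenses = 30_000  # rent, food, power, gas, clothing...
flat_rate = 0.20

# Filing as an individual: the whole paycheck is taxable income.
tax_as_individual = gross_pay * flat_rate

# Routed through "Derick's Life, LLC": living expenses are deducted
# first, and only the leftover "sub-paycheck" shows up as profit.
tax_through_llc = (gross_pay - living_expenses) * flat_rate

print(tax_as_individual)  # 8000.0
print(tax_through_llc)    # 2000.0
```

The whole advantage comes from where the expense deduction happens, which is exactly the asymmetry the rant is complaining about.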
To sum up my sentiment, if you haven't already inferred it on your own: it seems like the US government gives businesses and organizations a better deal than individuals when it comes to taxes...and I just think that's totally backwards from the way it should be ;)<br /><br /></rant><br /><br />Addendum: Yes, I know you can itemize your deductions on personal taxes, but I'm pretty sure things like food, rent, and insurance don't count...otherwise, why would politicians have even been talking about tax credits for insurance in this past fall's election?<br /><br />PPS: I'm an odd Libertarian...I guess you could call me a left-leaning, moderate libertarian, as I'm not against taxes completely. I just think they should be minimal and fair; i.e., the poor to lower-middle class shouldn't have to pay any, and the same goes for very small businesses ;)<br /><br />I may post my theory on what I consider to be a fair "Dynamic Tax Graduation" in a future we<span style="font-weight:bold;">b log</span>...<br />
<br />
<b>To blog or not to blog</b> (2009-04-04)<br />
<br />
I honestly have never been a big blogger. I guess putting a big, long-winded rant or article up where everyone can read it makes me feel compelled to put a lot of time and effort into it, not to mention the general obsessing over wording and typos. But now that I've been <a href="http://twitter.com/zephyrxero">tweeting</a> for the past 9 months, the division seems more apparent.
Most of the time I, and probably you too, want to express an opinion or idea about something but don't feel like putting all the effort into a big, long blog post...so now, with the magic of "micro-blogging," we can state our opinion simply, not worry about explaining ourselves so thoroughly, and move on.<br /><br />I'm honestly tweet-crazy these days; I check for Twitter updates about once an hour, if not more frequently. I post things constantly there, and I don't worry so much about typos and such, since I'll just end up posting something else pretty soon thereafter. It's really kind of freeing. Somehow the limitation on characters has helped me express more than a blog ever did. For example, I haven't really said much in this post, yet the total character count is 1425. I'm thinking that blogs can now be reserved for the few topics that actually need a lot of explaining, and 99% of the time we can just spurt out whatever we really wanted to say in a little status update.<br /><br />PS: the "statusphere" sounds much more clever to me than "blogosphere" ever did ;)<br />
<br />
<b>Predictions for Terminator: The Sarah Connor Chronicles (season 2)</b> (2009-04-04)<br />
<br />
I'll try to be as spoiler-free as I can with this, but if my predictions turn out to be true...then I guess that would technically make them spoilers too ;)<br /><br />So I've been thinking this for a while now, but what "Ms. Weaver" said to Ellison in the elevator in episode 20 makes me almost certain of it. Throughout the series they have hinted that perhaps there are some computers/robots/AIs that wish they could find a way to live in peace with the humans in the future. I believe Weaver/T-1001 was sent back to create a new AI that would be able to stop SkyNet from starting the war.
The only way to beat a malevolent AI is to make a benevolent one, with empathy and compassion, that will defend us. This is what I think John Henry is meant to be. This is why they have now revealed his "evil brother." That AI, which is mighty powerful and apparently already killing people in full force, is the one that will become SkyNet. John Henry's purpose is to prevent SkyNet from ever setting off Judgment Day. This would also explain why Weaver has shown no interest in looking for the Connors.<br /><br />Unfortunately, what will probably happen is that Sarah, John, and/or Cameron will kill John Henry by the end of the season, only to learn afterward that he was meant to be their protector.<br /><br />If I'm right, I'll be extremely happy with this, narrative-wise. It has always bothered me that people just assume a SkyNet/Matrix-like future is inevitable if we create true AI one day...but no one ever thinks about how to prevent it, offering a solution rather than just an empty warning. I, Robot (the recent movie, not necessarily the book, as I've yet to read it) kind of hinted at this before, but I think it needs to be explored more. Why do humans not kill all the "lesser" life forms below us? Sure, we need food, and we have the forethought to know we couldn't live without some of them...but the real core reason is empathy. It's built into our DNA. When we empathize with things, when we humanize things that aren't human, we give them some sort of importance. We value their lives. So if we're going to create real AI someday, even if it's hundreds of years from now, empathy must be a fundamental feature of that program. And perhaps the writers of TSCC have realized this too?<br /><br /><span style="font-weight:bold;">Update 2009-04-11 (Spoiler Alert!):</span><br /><br />Ha! I was totally right about Weaver and John Henry...and I'm very glad to see they didn't kill him off like I was expecting.
I also just realized that Weaver is the T1000 that was in "the box" on Jesse's sub. Now where they've taken it...wow, I have no idea what's next. It's looking like we get to see a future without John Connor. I'm loving this show right now; I really hope Fox doesn't kill it offAnonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com1tag:blogger.com,1999:blog-14730839.post-39340655409007985032009-01-03T12:30:00.005-06:002009-01-03T14:13:41.058-06:00Netbooks, UMPCs, and the like will converge with the new smart phonesI made a little <a href="http://twitter.com/zephyrxero/status/1091797852">tweet</a> about this yesterday, and apparently I need to expound upon the idea a little given all the negative response it got.<br /><br />In the next couple years, maybe even <span style="font-style:italic;">this year</span>, the niche market that <a href="http://en.wikipedia.org/wiki/Netbook">netbooks</a> and <a href="http://en.wikipedia.org/wiki/UMPC">UMPCs</a> have been working to fill will shift to the new generation of smart phones including the iPhone and the <a href="http://www.androidg1.org/">Android G1</a>.<br /><br />Next week at <a href="http://www.cesweb.org/">CES</a>, it is undoubted that many new phones of this class will be announced and shown to the world. Motorola is already working on <a href="http://linuxdevices.com/news/NS8856689287.html">moving most of their new high-end smart phones to the Android platform</a>, and surely other companies will be doing the same. This new class of phone is hard to define right now, but let's just say the iPhones and the G1 are pretty much the only phones that qualify at the moment. The Blackberry Storm tries to do it, but from reports I've seen it doesn't quite make the grade, and worthless little knockoffs like Samsung's Instinct are definitely out. 
Eventually though, a common name will come up for the class; we may even want to go back to the dated term "pocket pc" at some point, or perhaps just MobilePC would suffice since apparently outside of the US most people already refer to cell phones simply as "mobiles."<br /><br />Now, the primary purpose a netbook or UMPC is supposed to fill is to allow you full-fledged web browsing on the go. Some may say the UMPC never really took off here, and for the most part you'd be right, but Nokia's N770 and N800 have sold fairly well, even if the more powerful (and more expensive) ones didn't. Clearly the iPhone and G1 already meet this requirement, albeit a little slower with the current gen's lackluster processing power and RAM. The only thing they are both missing is Flash support, which for many sites is an absolute requirement. I'm fairly confident this will change in the next year, if not for the iPhone, then definitely for Android powered phones.<br /><br />The next thing netbooks/UMPCs are used for is email/chatting/IM, for which both the iPhone and G1 already have numerous options thanks to their app repositories. Some would argue Android is lacking here with no support for things like Microsoft's Exchange server, but businesses aren't really the ones buying netbooks and UMPCs in the first place.<br /><br />Some people use their UMPCs and netbooks for multimedia (primarily video or music playback) and obviously Apple's iTunes/iPod inspired iPhone does this to a pretty decent extent assuming you are using the Apple approved file formats. I'm not sure what the state of this is on the G1 right now, but I'm sure it can hold its own.<br /><br />Now, anything beyond these features really goes outside the scope of what these devices are being sold for. Yes, you can take notes and write papers on the netbooks, but forget about it on a UMPC. In fact many people still complain the keyboards of most netbooks are still too small to do any "real" typing. 
In the case of our smart phones many people are already working on solutions to the problem, <a href="http://www.engadget.com/2008/12/31/diy-external-iphone-keyboards-get-a-tad-more-practical/">although they're still not quite there yet</a>. I personally think a nice <a href="http://yourgadgetgeek.blogspot.com/2008/05/usb-mini-roll-up-keyboard.html">roll-up keyboard</a> would be the perfect accessory for these phones if they had proper USB ports and support from their OSes.<br /><br />Another thing people complain about is that these phones' screens are still too small, and I guess that's really subjective. I currently own an iPhone, and 9 times out of 10, the screen size really hasn't been a problem for me like I thought it would be. There are rumors right now of 4" and 6" models of the iPod Touch coming out this year, so if that turns out to be true, Apple will probably try the same with their 3rd generation of iPhones too (the current model is just over 3"). Just remember, at the end of the day, no matter how big we would like these screens to be, at some point it stops fitting in your pocket. I think 4"-4.5" is the sweet spot personally.<br /><br />Now obviously these smart phones can boast they have things most UMPCs and netbooks (not counting what are referred to as ultra-mobile notebooks....that's a different class) can't do, like play videogames, work as a camera/camcorder (jailbreaking required <a href="http://www.iphonehacks.com/2008/08/iphone-cycorder.html">in the case of the iPhone</a>) and of course, making phone calls.<br /><br />No matter how much we fight it, people love convergence. The ability to carry just one device in your pocket is great. Just a few days ago on New Year's Eve I saw one of my friends carrying around both a <a href="http://www.blackberry.com/">semi-smart phone</a> and a separate digital camera, and it's sad because you would think that'd be the easiest thing to add to a smart phone. 
There are already some cell phones, like the <a href="http://www.cellfanatic.com/2008/08/25/nokia-n82-review-live-and-uncut">Nokia N82</a>, with nice camera abilities, but they're still too expensive. My current gen iPhone's camera is damn near pathetic at only 2 megapixels, no flash, no zoom...but I'm sure the next revision will have to finally try a little harder in that department if Apple knows anything about what its customers want. And with the diversity Android brings to the table, there's no doubt a few companies will get it right.<br /><br />The only other hurdles right now are price and wireless availability, and I don't think these will be much of a problem for too much longer. In fact this is the one part of my prediction I'll be so bold to say will definitely happen this year... AT&T's 3G service is already plenty fast, and many other companies are already working toward <a href="http://en.wikipedia.org/wiki/4G">4G</a>. Sprint already started rolling out their WiMax service in a couple test cities last year, and hopefully we'll see AT&T and Verizon do something worthwhile with the <a href="http://en.wikipedia.org/wiki/700_MHz_wireless_spectrum_auction">700MHz spectrum</a> they bought up almost a year ago now. Even here in rural Mississippi, there are many places where you cannot get a broadband land-line, but you can pick up a decent signal for broadband wireless (3G/EVDO/etc...but I don't count EDGE, it's pathetically slow).<br /><br />In the price department, we were already close in 2008. The iPhone 3G sells for $199.99 and the T-Mobile/HTC/Google G1 sells for just $179.99 (with a two year contract of course). This year, I can almost guarantee you'll find someone drop these guys down to $100 or less, and that's when these things will start to go mainstream. Remember when the Razr came out and people ooh'd and aaahh'd, but no one bought them because they were $500 at launch? 
Then when they dropped below $100, everybody and their momma (literally) had one. Expect the same thing to happen when iPhones and Androids hit that price point.<br /><br />At the end of the day, people just want a small portable computer that they can get online with no matter where they are, and while the netbook took on that mantle briefly this past year, I think they've already hit their saturation point and the new gen of smart phone WILL fill that need in 2009. If these phones aren't capable of doing what you want/need, then a netbook or UMPC wouldn't have cut it for you either... I can't wait to see what new unknown goodies get announced next week ;)Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com1tag:blogger.com,1999:blog-14730839.post-19088305369743935312008-11-10T22:22:00.007-06:002008-11-11T00:12:52.290-06:00Deter videogame piracy, rentals and used sales with your pricingA lot of big game companies have been <a href="http://games.slashdot.org/article.pl?sid=08/11/11/006202">whining and complaining</a> about the rise in game rentals and used game sales over the past few years. They've been worried about piracy for much longer. All of these problems are directly related to the overinflated pricing of the average videogame.<br /><br />Now I've been making this argument for over 10 years now, but apparently it's time to go over it again. All these problems are never going to go away as long as the industry keeps trying to treat the symptoms rather than eliminate the causes.<br /><br />There are many reasons why people don't buy brand new games, whether it be by buying used, borrowing from a friend, renting or piracy. In this post I'm going to focus on pricing, but some other reasons are poor quality, or perceived questionable quality. By that last bit, I mean how many gamers don't really know whether a game is good or not and don't want to risk their hard-earned cash on it unless they know it's good beforehand. 
Besides the obvious solution of focusing more on quality (aka fun), offering free demos would easily alleviate the fear of buying a game that sucks. That being said, most demos I've played in the last couple years barely even give you a whole level to explore, so the gamer still isn't sure if they like your game or not even after playing your pathetic excuse for a demo.<br /><br />So onto the easiest, quickest way to work on a solution: pricing. To put it frankly, videogames cost too much. Sure, there are a hundred and one excuses as to why they cost so much, but just like your business, gamers only care about the bottom line. $50 is too much for a game, and $60 is absolutely too much for a game. The game industry loves to brag and boast at how much gross revenue they are bringing in, in comparison to the music and movie industries; however, if you look at the actual number of products sold it still pales in comparison.<br /><br />This summer the movie industry was disappointed in the new X-Files movie's box office sales of only $10.2 million in its opening weekend (just as a note, I'm not picking on the X-Files for any reason other than it was the first result for 'poor box office'). So, to do some quick anecdotal math...if each ticket was sold for $10 (which in my town they're closer to $7) then that means over a million tickets were sold. And that's just the opening weekend; I would say it's safe to assume that they continued to sell more after the opening weekend, and when it comes out on video a few more million will be sold. Now, in the movie industry this is considered to be so-so, or maybe even a failure; however, if a game were to sell a million copies or more it would be considered a blockbuster!<br /><br />Whether we like to admit it or not, gaming has still yet to become mainstream, and I'd wager the number one reason is price. 
The rising budgets of $5-10 million for a AAA game's production are given as the primary reason the average Xbox360 and PS3 game costs $60. According to <a href="http://en.wikipedia.org/wiki/The_X-Files:_I_Want_to_Believe">Wikipedia</a> the X-Files movie mentioned before had a budget of $30,000,000. That would be an outrageous budget for a game, but the industry still wants to play the numbers like they're more successful than the film industry. Now also, the Wikipedia article claims the movie has grossed over $65 million so far simply from box office. So how come this supposed flop has been seen by over 6.5 million people before it has even been released to video...while one of the industry's most popular games of recent, <a href="http://en.wikipedia.org/wiki/Halo_3">Halo 3</a>, has only sold a little over 8 million copies? Seriously, there are over 6.7 billion people on this planet, yet only 1 in 1,000 people have played one of the most popular games ever (now of course, to be fair, over half of that figure comes from areas that couldn't ever afford to be gamers).<br /><br />Now I'm not an expert in economics, but the basic principles of supply and demand are not hard to grasp. If you want to expand your user base, lower your price. Keep lowering it gradually over time until the market is saturated. With the enormous popularity of DVDs in the past decade, the movie industry seemed to have found a sweet spot of $20. At that price point many people felt comfortable enough to buy movies they had only seen trailers of or their friends had told them were good. People are much more willing to take a gamble on potentially buying a bad movie when there's only $20 at risk. However, with the average next-gen game costing three times that, people are going to be three times more choosy. 
It has been suggested that the average gamer will spend $1000 per year on gaming (including consoles and peripherals); however, keep in mind that is most likely for hard-core gamers, which are still, at the end of the day, a niche market. I'd bet that for every 1 gamer whose living room you can walk into and find a big rack of games, there are 10 if not 100 people where you will find a similar rack of DVD movies. Once again, I blame this on cost of entry. Blu-ray players have still yet to take off at prices of $200+, and it will probably not be until you can buy one for $100 that the average American will finally buy one. Now in game consoles, during the 8-bit through 32-bit generations all the consoles eventually dropped to a $100 price point or less, but the original Xbox and PS2 have still yet to hit that price and probably never will. Not only do individual game prices need to drop, but so do the consoles that play them. The market has also shown that people do not want to have to choose between competing formats. HD-DVD and Betamax both held back the home video industry until a clear winner was picked. Now I may be personally biased...but I see the only solution to the problem being <a href="http://opengameconsole.org">an open standard for game consoles</a>, but my experience in recent years has shown that the industry is far from ready to accept this.<br /><br />So, which would a game developer prefer? Sell 3 million copies at $60 ($180M gross) or 30 million at $20 ($600M)? I think the benefits are obvious right there...but now, let's get back to the original topic: used game sales. If a gamer can buy a used copy of a game for $30-45 rather than $60 for a new copy, of course they're going to at the very least consider it. However, if your brand new game is sold at $20, why would anyone in their right mind want to buy a used copy for $15? 
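For what it's worth, the gross-revenue comparison a paragraph back is easy to sanity-check in a few lines (the unit counts are this post's hypothetical figures, not real sales data):

```python
# Hypothetical figures from the post: a 3x price cut only grosses more
# if unit sales grow by more than 3x.
def gross(price_usd, units):
    """Gross revenue in dollars for a given price and unit count."""
    return price_usd * units

high_price = gross(60, 3_000_000)    # $60 title selling 3 million copies
low_price = gross(20, 30_000_000)    # $20 title selling 30 million copies

print(f"$60 x 3M  = ${high_price / 1e6:.0f}M")   # $180M
print(f"$20 x 30M = ${low_price / 1e6:.0f}M")    # $600M

# Break-even: how many $20 copies just match the $60 gross?
print(f"break-even at $20: {high_price // 20:,} copies")  # 9,000,000
```

The break-even line is the real argument here: the cheaper price wins only if it more than triples the number of buyers.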
If the game industry is tired of Game Stop and the like selling their games at these reduced prices, then they are going to have to reduce their own prices to compete.<br /><br />Not only would this thwart the proliferation of used game resellers, this would also deter rentals and piracy. Once again, you ask a pirate why they don't buy games very often and I can guarantee you one of their reasons will be the cost is too high. Things like <a href="http://en.wikipedia.org/wiki/Digital_rights_management">DRM</a> and one-time-use <a href="http://en.wikipedia.org/wiki/Downloadable_content">DLC</a> only attempt to block these people's efforts, but do nothing to address their motives. In fact, it actually drives many gamers who might normally buy your game to download a pirated copy instead. Just look at the backlash to many recent EA PC releases.<br /><br />Now the flip side of the coin is how to accomplish such a massive price drop. Sadly, if you drop your AAA titles to launch at a $20-30 price point, quite a few people will see it as a sign your game is not very good. At first the only safe way to do this would be with big name franchises which are guaranteed to sell, like a Madden or a Halo or GTA. New franchises could easily be hurt during this transition. Also, if one company decided to go this route while their competitors stuck with the old pricing, once again human psychology would put you at an unfavorable place, with people almost always assuming the worst. You can't come to an agreement with your competitors to all drop your prices at the same time, or else you might get in trouble for some sort of price-fixing or something. So the only way for this to really happen is slowly, and with big name titles. 
Of course, this has been tried before to some extent: a few years back, many PC games were sold at almost half the price of their console counterparts, but it couldn't stop the fact that hardcore gaming was dying in that market regardless...or perhaps it was just too little, too late. Sega tried selling its NFL 2K5 at $20 to better compete with Madden, but then EA bought exclusive rights to all NFL games so that killed that little experiment as well.<br /><br />Honestly the best place to try out a new pricing level is in the online sales arena (a la Xbox Live Marketplace, Steam or even the iPhone App Store). However, this still has the problem that to make money at this lower price point we need many, many more consoles in potential buyers' hands first. So, this is a complicated and potentially dangerous concept at this point really...however, it's going to have to happen at some point if the game industry ever wants to become a truly mass market medium. The same goes for stopping used game sales from outnumbering new game sales, which is going to eventually happen if it hasn't already with the current pricing schemes.<br /><br />$20-30 games will hopefully eventually be the norm, but the entire industry will have to work together. 
The nice part is it's a win-win situation for both developers and gamers alike...it's just a matter of time.Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com3tag:blogger.com,1999:blog-14730839.post-2555561692796222212008-10-23T14:19:00.007-05:002008-10-23T17:38:53.476-05:00Upcoming Browser JavaScript Engine BenchmarksI've seen lots of people recently saying that Firefox's TraceMonkey JavaScript engine blows Google's V8 out of the water...but I was a little skeptical so I decided to do some benchmarks of my own. Now with any benchmark, everything here needs to be taken with a grain of salt as performance will certainly vary depending on which sites you are viewing. For this test I have used WebKit's <a href="http://webkit.org/perf/sunspider-0.9/sunspider.html">SunSpider</a>. Also, since Chrome and Safari do not have native ports available on Linux right now, I had to do the test under Windows XP. The test machine is dual-core so multi-process/threaded apps should show a benefit, but I feel it's totally fair as single-core machines are quickly going the way of the dinosaur and do not accurately represent the future, which is what we're talking about here. Also, as Chrome does not have a stable/final release yet, I've compared with many other browsers' development builds.<br /><br />So let's get to some numbers. On my test machine Chrome completes the SunSpider test with a total time of 2423ms; pretty nice! Ok, now what about Firefox? The current release, 3.0.3, takes 4244ms. Well, what about the new beta for 3.1 that just came out? It scores at 3823ms. So, wait...that's not too terribly much faster than 3.0. Do we have TraceMonkey enabled? Nope. 
I don't know why, but the default build of FF3.1.beta1 ships with TraceMonkey disabled, so after enabling it the results jump down to 1654ms. Wow! That really is faster than Chrome. Ok, so how about "minefield" AKA the nightly trunk build of Firefox (FF-3.1.b2pre-20081023): 1567ms! So yes, indeed, Firefox 3.1 and its TraceMonkey JavaScript engine are faster than Chrome and its V8 engine.<br /><br />So what about the other guys? Well, first off let's try Safari, which shares its WebKit rendering engine with Chrome. The current release, 3.1.2, scores at 4894ms. So how about the development build? With webkit-r37604 (and Safari/WebKit's new SquirrelFish JavaScript engine) it goes to 1664ms. That's faster than Chrome/V8 too, but not quite as fast as FF/TM.<br /><br />Opera's latest release is 9.6, and I didn't manage to find a development build available for its next version...probably because it's still proprietary unlike the open source competitors we've discussed so far. Opera scores 5979ms, which is slower than both Safari and Firefox's current stable releases, but it's still much better off than Internet Explorer. Internet Explorer 7 scores an abysmal 90522ms. That makes it almost 58 times slower than Minefield. Microsoft, that's absolutely pathetic. <strike>I didn't get a chance to try out IE8, but from what I understand it's not much better off in the JavaScript department.</strike> <span style="font-style:italic;">See update below...</span><br /><br />Here's a chart I threw together in OpenOffice of the results. 
I omitted IE7 because its result was so much larger that it made the differences between all the others hard to see.<br /><br /><a href="http://www.flickr.com/photos/zephyrxero/2967910438/" title="sunSpiderBenchmarksChart by Zephyr Xero, on Flickr"><img src="http://farm4.static.flickr.com/3029/2967910438_c009278f78.jpg" width="500" height="417" alt="sunSpiderBenchmarksChart" /></a><br /><br />Lastly, just for fun I tested my iPhone and Mobile Safari (running firmware 2.0.2); it took 136081ms. But hey, for a device running an ARM @ 412MHz with 128MB of RAM vs. a full fledged computer with a dual-core processor and a gig of RAM...that's still got to be better than Internet Explorer 7 did. I wish I had a Windows Mobile device to test, and an Android G1 too, but oh well.<br /><br />So, yes, with TraceMonkey enabled...Firefox reclaims its place as the fastest browser, but Safari and Chrome certainly aren't too far behind either. With this extreme increase in JavaScript performance on its way...maybe it's time to finally retire the old "web 2.0" buzzphrase and move on to Web 2.1<br /><br /><span style="font-weight:bold;">UPDATE:</span><br />Seems I owe Microsoft an apology for not benchmarking IE8 before. They really have made some progress. The following numbers cannot be fairly compared with the above because they were run inside a virtual machine with access to only a single core and half the RAM, but here they are. IE8-beta2 scored at 9542ms. But, for reference, in the same virtual environment Firefox "Minefield" was still able to pull off 1873ms. 
So, while still nowhere near as fast as the other guys, IE8 is light-years ahead of IE7 in its JavaScript performance.<br /><br />Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com4tag:blogger.com,1999:blog-14730839.post-87580870346176947392008-08-12T12:53:00.004-05:002008-09-22T13:09:52.749-05:00Guide: How to install Quickbooks Enterprise 8.0 on Ubuntu 8.04.1I recently had the "fun" of forcing Quickbooks Enterprise Server 8 onto an Ubuntu 8.04 server. Intuit only officially supports Fedora and OpenSUSE with their RPM installer, so I had to work a little Alien magic and do quite a bit of hand-editing; below you will find a guide of how to do this yourself. Do note that a large portion of this guide is derived from an older one located at http://brousch.orthicomp.com/howto/qbent7-linux-server-ubuntu-dapper.html ...the rest comes from reading through the non-functional post-install script inside the RPM.<br /><br />1. Login as root (or you can use sudo on each command if you just really want to)<br />2. apt-get install alien rpm lsb lsb-rpm gamin<br />3. mkdir /var/lock/subsys<br />4. cd to the directory containing the RPM<br />5. alien qbdbm-VERSION.rpm (do not use the "--script" option; it will fail)<br />6. dpkg -i qbdbm-VERSION.deb (for some reason Alien will bump the last version number)<br />7. add "daemon.* -/var/log/qbdbfilemon.log" to /etc/syslog.conf<br />8. touch /var/log/qbdbfilemon.log<br />9. /usr/lib/lsb/install_initd /etc/init.d/qbdbfilemon<br />10. /usr/lib/lsb/install_initd /etc/init.d/qbdbmgrn_18 <br />11. addgroup quickbooks<br />12. Add users who will be accessing the samba share to the quickbooks group<br /> - usermod -aG quickbooks USERNAME (the -a flag appends, so the user's existing groups are kept)<br />13. Create a directory that your quickbooks data will go in (referred to as PATH from here out)<br />14. chown <your primary samba user>:quickbooks PATH<br />15. chmod -R 770 PATH<br />16. 
If the path you just created is not already within a Samba shared directory, set it up as one.<br />17. Use your favorite text editor to edit /opt/qb/util/qbmonitord.conf<br /> - Remove the default path and type in the new one you just created<br /> - qbEnterprise does not scan for subdirectories, so if you have more than one directory each one will need to be added on separate lines in qbmonitord.conf<br />18. /etc/init.d/qbmonitord start<br />19. /etc/init.d/sysklogd restart<br /><br />Now the original guide I went by had a list of commands to create links for putting it into the startup of your system, but I believe the two install_initd commands above (taken from the install script) should take care of that for you. However, when I was setting this up, I first attempted to use the RPM's post-install script, which failed out, but it may have created these links for me before failing. So, if it's still not working right after you finish my guide, try going back and doing this too.<br /><br />- ln -s /etc/init.d/qbmonitord /etc/rc2.d/S85qbmonitord<br />- ln -s /etc/init.d/qbmonitord /etc/rc3.d/S85qbmonitord<br />- ln -s /etc/init.d/qbmonitord /etc/rc4.d/S85qbmonitord<br />- ln -s /etc/init.d/qbmonitord /etc/rc1.d/K15qbmonitord<br />- ln -s /etc/init.d/qbmonitord /etc/rc6.d/K15qbmonitord<br /><br />For more info see the Linux Install Guide PDF that was included on the CD.<br /><br />If converting an old company file to Enterprise '08, you'll need to make the user you log in to the Samba share with during the upgrade the owner of the .QBW, .QBW.ND and .QBW.TLG files.<br /><br />Hope this helps...and good luck!<br /><br /><span style="font-weight:bold;">UPDATE:</span><br />Make sure to also create "/var/lock/subsys/" and touch /var/lock/subsys/qbdbfilemon and /var/lock/subsys/qbdbmgrn_18 or you will get some weird errors with the file saying it's in use when it's really 
not!Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com2tag:blogger.com,1999:blog-14730839.post-30343056929386211172008-08-08T03:22:00.004-05:002008-08-08T04:46:59.136-05:00My Next Smart PhoneSo a lot of my recent posts have dealt with the iPhone...but that's probably because I've spent so much time with it this past month. While I'm very glad I finally upgraded from my usual cheap/free basic phone (with no interwebs), things are still far from perfect...and I love redesigning/dreaming about all my gadgets and such.<br /><br />So, it's late...and I should probably be in bed right now, so for both our sakes I'll try to not be as wordy as usual...well, at least from here on.<br /><br />1. I want an open source OS, and I want to hack around on it without having to “jailbreak” it. I completely understand Apple's thinking and methodology with its App Store; however, I should have the option of installing unapproved 3rd party apps (at my own risk of course). This also entails that I want access to a command line terminal and the ability to mount my phone's hard drive just as if it were any other external USB drive. My next phone will quite definitely be running OpenMoko/Android/LiMo/OpenSymbian or something like that. To sum it up, I want to be able to do whatever the hell I want to with MY phone.<br /><br />2. <a href="http://zephyrxero.blogspot.com/2008/07/why-isnt-iphones-screen-169.html">I want a true 16:9 screen</a>....not 16:10, not 3:2... standard 16:9, with preferably 480p resolution and OLED rather than LCD.<br /><br />3. This is obvious, but of course I want a more powerful CPU (dual-core would be awesome). I want a much more powerful GPU that can handle playing up to 720p video (Nvidia's new Tegra line claims it can already). More RAM, more hard drive space....you know, the usual ;)<br /><br />4. A much much better camera. I've honestly never had a phone with a camera in it before. 
However, now that I have one....I want it to be at least as capable as my shitty 8+ year old digital camera (3.1MP). I'm talking at the bare minimum 5 megapixels, if not 10. I want all the settings, like aperture and shutter speed. I need a manual focus option, white balance...the works. I need some sort of flash mechanism too. The iPhone's current camera has really shitty capability for taking low-light shots. Furthermore, this camera needs to be capable of recording video, not just stills. My wife's 3 or 4 year old Razr can take video (albeit really shitty quality, but still video), why can't my frickin' iPhone? I'm hoping for at least 480p resolution too. Zoom would be nice too, but I'll understand if it gets left out... Oh also, have some sort of lens cover built in for when I'm not taking photos (<a href="http://www.mobiletechreview.com/phones/Nokia-N73.htm">Nokia knows what's up</a>).<br /><br />5. A hand-editable equalizer. Every set of headphones is different, so I really need at least a 5- if not 7-band EQ that I can tweak to perfection rather than just a handful of presets. Side note: I haven't been able to figure out if the “iPod” app's EQ settings are system-wide or are just for the iPod app. This is partially due to the fact that thru various setbacks I still haven't gotten around to jailbreaking my phone so I can transfer my music collection over (maybe it'll finally happen this weekend with Pwnage 2.0.2).<br /><br />6. A more repairable design. Handling my iPhone is like handling a hand grenade every day. I spent hundreds of dollars on this thing...and it could all be over with just one little accidental drop. I really wish I had the capability to replace my own battery or screen in the unfortunate case they were to break. Somehow I just know that before my 2 year contract is up I'm gonna have to pay Apple a hundred or two more to fix/replace mine when it breaks.<br /><br />7. Here's a weird one... slightly bigger. Yes...I wish my iPhone was bigger. 
I've already discussed how and why I'd like it to be a little longer, but also a little bit thicker too, I think. This would give room for a bigger battery most importantly, and while I don't think its current battery is nearly as bad as most make it out to be, it could always be bigger. I think because of its size (and sheen) it just feels way too delicate for a klutz like me to be handling on a regular basis.<br /><br />8. Stereo Bluetooth. I love my <a href="http://www.skullcandy.com/shop/smokinbuds-p-10.html">Smokin' Buds</a> and all, but it'd be really cool to be totally wireless one day...too bad the current iPhones only support mono via Bluetooth :/<br /><br />9. Front side web cam. Not only do I want the main camera on the back to be much better...I want a second one (as was rumored) on the front for video chat. I don't know about you, but I just think that would be awesome.<br /><br />10. Use a standard USB port or something. It just irritates me that Apple has to have proprietary ports on their iPods and the iPhone. Why not just put a mini-USB or FireWire port on there? Maybe it's just me, but I think it would be awesome to be able to whip out a <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16823166084">roll-up keyboard</a> and start typing away (thus completely eliminating my need for a laptop altogether)...or even think about being able to plug your smart phone right up to a printer. OK, perhaps that's just silly, but maybe not; I should at least be able to print to a network printer that uses either PostScript or PCL. I don't know... perhaps Apple has good reason for using its own port; maybe it can handle more bandwidth and/or voltage than USB 2.0 or something. I mean, a simple converter dongle could get the job done right now...I'm just wondering what the point is.<br /><br />11. Use more of a matte finish. 
Honestly, the shiny look of the iPhone is great for press pictures and whatnot, but it just ends up in smudge city at the end of the day. I really wish companies would go for some sort of rugged rubber or something rather than the shiny, brittle plastic. Obviously the glass on the front would have to be glossy...but why the rest of the phone? I've already noticed the cheap chrome plating around the edge chipping and getting rough in not even a month of usage so far.<br /><br />That's about it. An hour later now, it looks like I failed on the whole keeping it simple concept. Honestly, numbers 1 and 4 are the only ones I really, really need...but they'd all be nice ;)Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com1tag:blogger.com,1999:blog-14730839.post-41305742282762357012008-07-20T19:48:00.010-05:002008-07-22T00:28:28.855-05:00Why isn't the iPhone's screen 16:9?So, like many of you... I've been obsessed with my new iPhone this past week. I have grown curious as to why its designers chose the screen they did. The more I think about it, the more sense it would have made to go with a 16:9 ratio screen. I don't know what possessed them to go with a 3:2. This device is supposed to become my premier place for viewing mobile video...wouldn't it make sense to use the standard aspect ratio for all things video today?<br /><br />I've done the math (hopefully it's right) and if we keep the same width, the length only grows by about half an inch (approx. 1.3 cm), which I'd be more than fine with, and probably everyone else would be too, I'd imagine. 
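Here's a quick sanity check of that math (a sketch only, assuming the current screen is 3.5 inches diagonal at exactly 3:2, and that the short side stays fixed):

```python
import math

def screen_dims(diagonal, aspect_long, aspect_short):
    """Return (long side, short side) in inches for a screen of the
    given diagonal and aspect ratio."""
    # diagonal^2 = long^2 + short^2, with long/short fixed by the aspect ratio
    unit = diagonal / math.hypot(aspect_long, aspect_short)
    return aspect_long * unit, aspect_short * unit

# Current iPhone screen: 3.5" diagonal at 3:2
long_32, short_32 = screen_dims(3.5, 3, 2)

# Keep the width (short side) and stretch the long side out to 16:9
long_169 = short_32 * 16 / 9
new_diag = math.hypot(long_169, short_32)

growth = long_169 - long_32
print(f"long side grows by {growth:.2f} in ({growth * 2.54:.1f} cm)")
print(f"new diagonal: {new_diag:.2f} in")
```

This comes out to roughly half an inch of extra length and a diagonal just shy of 4 inches, so the figure quoted above checks out.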
I've made a quick little graphic to give you a better idea of what I'm talking about here.<br /><br /><center><a href="http://www.flickr.com/photos/zephyrxero/2687535258/" title="iphone screen comparison by Zephyr Xero, on Flickr"><img src="http://farm4.static.flickr.com/3014/2687535258_bc539cde2a_m.jpg" width="240" height="151" alt="iphone screen comparison" /></a></center><br /><br />Now this makes the screen about 4 inches rather than the 3.5 it currently is, or just about the same size as the PSP's screen. Assuming they were to keep a similar resolution, it would go from 480 x 320 to 569 x 320, I guess. The 569 sounds a little awkward to me, so I bet they'd probably shift it up or down by 1 pixel. Then again, if a higher resolution puts a strain on their hardware this might make them want to drop the resolution to what the PSP uses (480 x 270). What I'd really love to see is for them to bump it up to EDTV resolution (aka 480p-wide, or 854 x 480). Honestly, I can't see any self-respecting portable media player shipping with a resolution less than that. Maybe Apple will go this route with their next generation...but by then, I'll probably have moved onto something powered by OpenMoko/Android/LiMo/MobLin/OpenSymbian/etc ;)<br /><br />PS: Haven't gotten around to jailbreaking my iPhone 3G just yet, hopefully all will go well tonight...<br /><br /><right><script src="http://digg.com/tools/diggthis.js" type="text/javascript"></script></right>Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com1tag:blogger.com,1999:blog-14730839.post-60467749239652316802008-07-18T00:28:00.004-05:002008-07-18T00:57:18.582-05:00Pandora iPhone UpdateWell, after reading <a href="http://www.roughlydrafted.com/2008/03/13/iphone-20-sdk-the-no-multitasking-myth/">this article</a> and a few others tonight...it's become clear that some of my wish list points are not possible under the terms of Apple's SDK agreement: no 3rd party apps are allowed to run in the background. 
This means that my #1 feature is impossible, and it kinda puts a damper on some of the others (like, for example, opening Safari to look at artist info and things like that isn't nearly so enticing if it means my music has to be put on hold till I come back). Sadly, it looks like I'll have to rely on my own music collection if I want to listen to music while web browsing... This really sucks since I'm still waiting on them to release PwnageTool 2.0 so I can transfer my songs over (Linux user...no iTunes).<br /><br />As another note, Last.fm now has an app too (although I still wish they could work out a deal to pool their data ;) ). I really like those guys; they seem much more open-source friendly and focused on community/user-generated content...but I never seem to enjoy the stations Last.fm creates as much as I do my Pandora ones. Still, the Last.fm app seems to be a bit more featureful, including a few of the ones on my list...however their caching/lag problem is even worse than Pandora's!<br /><br />So anyway, as it stands...my custom radio stations will have to be saved for music-only times as far as my iPhone is concerned :/<br /><br />[<span style="font-style:italic;">Listening to my Pandora station: "Igneous Radio" as I type this...isn't multitasking wonderful?</span>]Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com0tag:blogger.com,1999:blog-14730839.post-14951832817553517002008-07-14T21:05:00.006-05:002008-07-15T00:13:27.741-05:0010 Features Pandora on the iPhone is MissingSo I did it... I broke down and jumped aboard the iPhone train this past weekend. One of the biggest reasons I decided to take the plunge? Internet radio anywhere I go...particularly Pandora. I'm so addicted to Pandora, I almost never listen to my personal music collection anymore.<br /><br />So, when I saw that they released a <a href="http://pandora.com">Pandora</a> app for my new iPhone 3G, I was ecstatic. 
The app's great...but the web-based version of Pandora has a lot more controls than the simple iPhone app, which is to be expected (for now). Still, there are a few features I'd love to see added in a future release.<br /><br />1. Add the ability to continue listening to Pandora while using other apps. I'd really like to keep listening to Pandora while I'm browsing the web or wasting time on Facebook. This is probably the number one flaw with the current version of the app.<br /><br />2. I really miss what I call the "snooze button". On the regular flash/web version of Pandora you have to click the "Guide Us" button at the bottom and then click on "(Zzz) I'm tired of this song, don't play it for a month". Sadly, I end up using that button a lot more often than I'd like.<br /><br />3. We need the ability to add more songs/artists to our stations via the app. All of my stations are multilayered with fairly long lists of songs I like in a particular genre; however, if I create a new station via the iPhone app I only get the ability to use a single song or artist seed. It'd also be handy to be able to do all the same things you can edit on the regular version...if nothing else, maybe allow us to click a button to open up Safari and go to the edit page (assuming feature #1 on my list is fulfilled first).<br /><br />4. It'd be really nice to be able to pull up artist, album and song info from within the app...just as you can access the "why are you playing this" feature.<br /><br />5. Song history, like in the regular version, would be nice too...although not absolutely necessary. It sure would be neat to flick my finger to the right and go back and see what I just listened to ;)<br /><br />6. Resume where you left off. It'd be pretty cool if, when I closed/paused the Pandora app, I could come back to it later and finish out the last song I had going and go from there. 
This would be especially handy for when you receive a call while listening...<br /><br />7. The volume control needs to be reworked. Right now it's kinda funky to change the volume. It may just be because I have fat fingers or something, but I have to try shifting the volume slider 3 or 4 times before it responds...I've almost convinced myself I have to double-click it first to unlock it :P Also...I've discovered I can use the physical volume buttons on the side of the iPhone, but I'm not sure if I'm actually changing Pandora's volume, or just my headphones/speaker volume, since the slider does not change/update along with it.<br /><br />8. Do a better job of caching. Yes, I'm sure the iPhone has limited RAM available; however, I've got plenty of storage space you could go ahead and cache the next song to. On my computer, Pandora instantly starts playing the next track when the current one finishes...but there's quite a noticeable lag between tracks on my iPhone. Do note, this only occurs when I'm on the EDGE network (3G's not gonna be available in my area till at least October), and there's no problem when connected thru wifi; but once again, that's one of the whole reasons I wanted an iPhone: so I could listen to Pandora in my car...and most of America won't have 3G for quite some time.<br /><br />9. Offer EQ options. The iPhone's "iPod" feature has the ability to choose different EQ settings, but I'm not so sure that affects Pandora. If I'm right, and it doesn't...this would be another much-appreciated feature (of course the web-based version could use this too).<br /><br />10. Stream in Stereo. This is another one I'm not sure of currently...but I'm fairly sure the iPhone version of Pandora is streaming in mono, which I assume is to save bandwidth (which certainly makes sense when on EDGE). However, when I'm on wifi or 3G, it'd be really nice to have stereo like we get on the regular version. 
Also, on the flash-based version I'm pretty sure Pandora uses 128kbit MP3 audio, but I assume that's a limitation of Adobe's Flash player, so why not use Ogg Vorbis or AAC in the iPhone app edition where you have more control? That would certainly help lower the bandwidth costs.<br /><br />11 (Bonus): OK, I'm just dreaming with this one...but it'd be awesome if Pandora would look at my Last.fm profile while generating my stations to get even more insight into my tastes ;)<br /><br />Also, of note...I haven't seen/heard a single advertisement since I've been using the iPhone Pandora app. I don't know how they're affording to do this, so if this changes in the future...Pandora, please give us the choice between free with ads and paid without ;)<br /><br />Pandora, I love you! Keep up the good work...<br /><br />[<a href="http://www.pandora.com/people/zephyrxero">Check out my stations</a>]<br /><br />Update: 2008-07-15@00:09CDT<br />A couple more I forgot...<br /><br />12. Add the ability to switch accounts. My wife has her own account, and will surely want to switch over to listen to some of her stations during trips and such...<br /><br />13. Progress bar. Much like the regular web version, there needs to be a progress bar to give you an idea of how much of the song is left...maybe even better if you could somehow click on it and see the exact time, i.e. (2:38 of 4:53) or something like that ;)Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com3tag:blogger.com,1999:blog-14730839.post-68088627149503246342007-08-19T21:43:00.000-05:002007-08-19T21:44:04.493-05:00Textures, Colors, and BitsThere's a lot of talk these days about how much space is needed to store texture data for the latest generation of game consoles. While the Wii and the Xbox 360 have stuck with the tried-and-true DVD format (4.7GB single layer, 8.5GB dual layer), the PS3 has gone with BluRay, which provides 25GB single layer and 50GB dual layer. 
Some developers are claiming to be maxing out the potential of the DVD (not to be confused with HD-DVD); however, I'm willing to bet that about 98% of the time this is simply due to poor compression and color palette use (note: I'm certainly no expert on all of this...just thinking out loud here).<br /><br />Even though we now have a small but growing number of TVs and monitors that can support beyond 24bit color, this does not mean every single texture needs to be stored with such a high color palette. Beyond the Red, Green and Blue channels, you'll also want to store an "alpha" (or transparency) channel. This will bump some textures up to 32bits. However, giving up 8 bits per pixel of your textures can eat up a lot of space and be quite wasteful, so some will only use a single bit for transparency, putting them in an either-or situation. This second method, while much more efficient in terms of storage and the processing required to render it in the scene, looks awful most of the time. You find this 1bit alpha used most often in the textures for leaves, blades of grass, or a chain-link fence...all of which alias so badly that areas where they're used heavily take on a shimmering, sparkling effect, which can be quite irritating to the eye. Also, since most modern anti-aliasing techniques strictly affect the polygons, and not the textures in the scene, this becomes even more apparent, as it makes these items stand out even more from the rest of the scene.<br /><br />With the ultra-powerful CPUs and GPUs found in the 360 and PS3, the processing power needed to render an 8bit alpha vs. a 1bit alpha is almost negligible at this point. However, as I started off talking about, it can make quite a difference in the amount of space required to store your textures, whether in memory or on the physical storage media. 
With the higher and higher resolutions of today's textures, multiplied by a growing number of them (modern graphics engines sometimes use 8 or even 16 different textures across just one area of a model), those 7 little bits add up pretty quick.<br /><br />The thing that has always puzzled me is why no one uses anything in between. The difference between a 1bit mask and an 8bit mask is very apparent, but the difference between a 2 or 3 bit mask and an 8bit one is not nearly as much.<br /><br /><center><a href="http://www.flickr.com/photos/zephyrxero/1176302355/" title="Photo Sharing"><img src="http://farm2.static.flickr.com/1148/1176302355_65271b5750_o.png" width="215" height="263" alt="side-by-side comparison" /></a></center><br /><br />Looking at them close up makes the differences more apparent, but it also makes it clearer that we don't really need a full 8bit mask for decent alpha when it comes to edge aliasing.<br /><br />First up is a 4x magnified close-up of the 8bit mask:<br /><br /><center><a href="http://www.flickr.com/photos/zephyrxero/1176302429/" title="Photo Sharing"><img src="http://farm2.static.flickr.com/1354/1176302429_bed7d3069f_o.png" width="400" height="384" alt="doodle-8bit" /></a></center><br /><br />And then look at the 1bit's very visible difference.<br /><br /><center><a href="http://www.flickr.com/photos/zephyrxero/1176302381/" title="Photo Sharing"><img src="http://farm2.static.flickr.com/1320/1176302381_4db66bf556_o.png" width="400" height="384" alt="doodle-1bit" /></a></center><br /><br />However, now look at the 2bit example. 
It's not quite as nice as the 8bit, but a huge improvement over the 1bit...<br /><br /><center><a href="http://www.flickr.com/photos/zephyrxero/1176302395/" title="Photo Sharing"><img src="http://farm2.static.flickr.com/1035/1176302395_8eda8e1589_o.png" width="400" height="384" alt="doodle-2bit" /></a></center><br /><br />And if we bump that up to 3bit, there's even less of a difference.<br /><br /><center><a href="http://www.flickr.com/photos/zephyrxero/1176302407/" title="Photo Sharing"><img src="http://farm2.static.flickr.com/1300/1176302407_2ba98d5bba_o.png" width="400" height="384" alt="doodle-3bit" /></a></center><br /><br />I had also originally made a 4bit example, but there was practically no difference between it and the 8bit example at all.<br /><br />Now I want to point out that for all of this I am focusing on textures that have obvious aliasing problems, like a <a href="http://www.elitebastards.com/pic.php?picid=hanners/adaptive-aa/iq/hl2/0x01.jpg">chain-link fence</a>, or <a href="http://www.ixbt.com/video2/images/g70/aa-aa4x-taa-no.jpg">leaves on a tree</a>. For other effects like smoke, fire, or maybe even hair this may not work as well and you may still need to use 8bit to look good, but then again... maybe not? ;)<br /><br />So, usually a texture map is going to be stored with either 8, 16, 24, or 32 bits (mainly due to byte-addressable memory, I would assume). An 8bit texture is only going to give you an indexed palette of 256 colors, or 255 if you use one of those colors as your transparent color. If we take 2 of those bits and set them aside for transparency (giving 4 levels of transparency), we now get 64 colors that can each be displayed at 4 levels, or even 5 if you reserve one of those colors as absolutely transparent, as with the usual scheme. Generally this is still plenty of colors for a mainly monochromatic thing like a leaf or blade of grass.<br /><br />However, let's say you need more colors. 
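Before moving on, here's what that 6+2 split looks like in practice (a minimal sketch; the helper names are mine, and the texel format is just the one byte per pixel scheme described above):

```python
# Pack a 6-bit palette index (0-63) and a 2-bit alpha level (0-3)
# into a single byte: high 6 bits = color index, low 2 bits = alpha.

def pack_texel(palette_index, alpha_level):
    assert 0 <= palette_index < 64 and 0 <= alpha_level < 4
    return (palette_index << 2) | alpha_level

def unpack_texel(byte):
    return byte >> 2, byte & 0b11

texel = pack_texel(42, 2)            # color 42 at the third alpha level
index, alpha = unpack_texel(texel)   # round-trips back to (42, 2)

# A 2-bit alpha gives 4 evenly spaced levels of opacity
opacity = alpha / 3                  # maps 0..3 onto 0.0..1.0
```

The point is that the texture still costs exactly one byte per texel, same as a plain 256-color indexed image; the 4 alpha levels come out of the palette size, not out of extra storage.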
Let's look at a 16 bit texture, which in most instances is perfectly fine for even high quality graphics. Now we could use an indexed palette like with the 8bit image, and that would follow almost the same principles, so let's look at using an RGB scale instead. Once again, many times people will use 5 bits for each color channel and then the extra bit for transparency [RGBA5551], which gives us the same aliasing problems as with the 255+1 color image. What if we bump each color channel down to 4 bits? Now we have 4 bits for the transparency channel as well (16 levels of transparency) [RGBA4444]. As we've already determined, that's more than enough for the issue we're looking at, but it does cut into your number of possible colors by quite a bit. Perhaps, with a 16bit texture, you'd still be best off to use a static palette, with 2 or 3 bits set aside for transparency? It would probably depend on what type of texture you're working with individually.<br /><br />So okay, let's look at 24bit textures, which rarely have an alpha channel at all. Here the solution seems very obvious: cut your color channels down to 7 bits rather than 8, and then use the remaining 3 bits for alpha [RGBA7773].<br /><br />Generally, however, if you want 24bit color textures and transparency, you just bump on up to 32bits...which, once again, may or may not be a waste depending on what type of texture it is you're working with. Most TVs and monitors still cannot display anything higher than 24bit color (don't let your Windows display settings fool you guys, there is no such thing as a 32bit monitor), though some of the newer displays will actually go up to 30bit (10 per channel) or 36bit (12 bits per channel), which is only available to you if you're using HDMI v1.3+ or DisplayPort 1.1+. 
In such a case (which I doubt anyone will even consider till the next generation of systems) we could have a 32bit texture using 10 bits for each color channel and 2 bits for transparency, or maybe a 40bit one (the next byte-addressable size up) with 12 bits per color channel and 4 bits for transparency. However, most developers will probably opt to start using floating-point color channels, ending up with 48 and 64 bit textures, which will take even more space...<br /><br />Another interesting thing I'd like to point out while I have your attention: rather than using a 32bit texture for super-high-quality color representation, why not use a 16bit texture that's twice the resolution or more, yet uses about the same space? When the texture is filtered through mipmapping and various other filters, you probably won't even be able to tell the difference at a distance, yet up close you've got even more detail than before ;)<br /><br />I guess the point of all this is that today's developers don't seem to try to be as creative about such problems as they were a decade ago. If you think squeezing all those textures onto a DVD is hard, try putting them on a 16 megabit SNES cartridge... If you spend just a little more time thinking about these kinda things, I'm sure that you'll be able to fit just about as much detail into that 9 gig DVD as you have planned to plop onto that BluRay disc. 
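That trade-off between bit depth and resolution is easy to verify with some quick arithmetic (a rough sketch that ignores compression and mipmap chains; the 1448 figure is simply 1024 times the square root of 2, rounded):

```python
def texture_bytes(width, height, bits_per_texel):
    """Raw storage cost of an uncompressed texture, in bytes."""
    return width * height * bits_per_texel // 8

# A 1024x1024 texture at 32 bits per texel costs 4 MiB...
full_color = texture_bytes(1024, 1024, 32)

# ...while a 16-bit texture (e.g. RGBA4444) with twice the texel
# count -- about 1.4x the resolution on each axis -- costs the same.
double_res = texture_bytes(1448, 1448, 16)

print(full_color, double_res)  # nearly identical byte counts
```

So halving the bits per texel really does buy you double the texel count for free, which is the whole argument above.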
The same goes for audio (<i><a href="http://www.ps3fanboy.com/2007/08/14/heavenly-sword-has-10gb-of-sound-data-alone">10 gigs my ass, that must be uncompressed or something</a>...have these guys never heard of <a href="http://vorbis.com">Ogg Vorbis?</a></i>).Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com0tag:blogger.com,1999:blog-14730839.post-63721096137458835372007-07-22T01:50:00.000-05:002007-07-22T02:42:40.871-05:00The GPU's days are numberedI just saw <a href="http://www.custompc.co.uk/news/601119/john-carmack-reckons-physx-is-useless.html">an article with a quote from John Carmack</a> about how he doesn't think there's a real need for a dedicated physics processor (PPU) like the Ageia PhysX. He says that between the advancement of multi-core CPUs and GPUs they should be able to handle physics just as well in the near future. This is not the first time I've heard this sentiment, but it has brought back an old thought in my mind. How much longer will it be till the GPU suffers the same fate?<br /><br />When dedicated graphics processors (predominantly used for 3D graphics) were first introduced in the mid-late 1990s, there was certainly a need for them. They allowed game developers to create a new level of graphical quality that would not have been possible strictly using the general purpose CPU. However, that didn't stop Intel from developing MMX (multi-media extensions) for their Pentium chip line. The idea was that for those who wanted decent 3D graphics, but didn't need the best of the best, the new instructions built into the chips would allow for mainstream use of 3D. Some tried, but in the end it just wasn't enough to compete with even a low-end dedicated 3D processor. When NVidia released the GeForce 256, the world's first GPU with hardware transform and lighting, it was all over. 
Later cards would introduce the capacity to have fully programmable shaders, thus moving the GPU and CPU even further apart.<br /><br />Today it seems the progression of GPUs has started to plateau, just as CPUs did a few years ago, prior to the multi-core revolution. It is already predicted that 3D graphics chip makers will begin to follow suit with multi-core GPUs in the next couple of years. However, I'm much more interested in another route AMD is planning to take. They have announced a future CPU called the "hybrid." This multicore CPU will also feature an on-die GPU. Details are sketchy at best beyond that...so people like me are left to allow our imaginations to run wild with the idea.<br /><br />Now, AMD has tried to make it clear that the graphical quality of such a setup will be comparable to current on-board IGP solutions from their ATI division. But that certainly doesn't mean things won't progress beyond that in future iterations. Imagine an AMD hybrid chip featuring 4 of their next generation Phenom CPU cores, with an additional 2 ATI 3D GPU cores...all on one chip. Now think about if they take it one step further and start integrating some of the same features of their GPUs directly into their CPU's cores... Then instead of the 6-core hybrid setup I described before, you could have a 4 or 8 core chip with 3D and 2D graphics capabilities built right into a general purpose CPU.<br /><br />Now, I'm not a hardware/processor expert by any means, but from what I understand, one of the biggest differences between standard, general purpose CPUs and 3D GPUs today is the ability to use vectors instead of just the usual integers and floating-point numbers. AMD has already included this ability in their upcoming K10 chips, so we'll already be partially there come this fall. Having hardware vector processing in the CPU will also help confirm what Mr. 
Carmack thinks about PPUs.<br /><br />To sum it up, I think Intel's idea behind adding MMX to their Pentium MMX and Pentium 2 chips over a decade ago was just a little too far ahead of its time. If things continue to progress the way they have over the last few years, we might see the dedicated graphics processor go the way of the sound card. Sure, there will always be a few people who just have to have a higher-end experience, but for the vast majority of people, a future generation of CPU may be able to handle all their processing needs.<br /><br />I could end this article right here, but there's something else to think about if you follow my line of thinking. There are hundreds of processor and chip producing companies...however, it's really only 4 companies that make the bulk of them used in desktops, laptops, and non-portable gaming systems: Intel, AMD, IBM and NVidia. Until recently, that list would have featured 5 companies, but as you should already know, AMD bought out ATI last year. With ATI and NVidia being the only 2 companies that mattered when it came to GPUs, that leaves NVidia alone now as the only strict GPU and chipset producer in this bunch. Intel already has its own line of GPUs too; however, most people still find them to be highly inadequate when compared to NVidia and AMD/ATI's offerings. Also, now that AMD and Intel have bumped IBM out of the desktop/laptop world, thanks to Apple, they're also in an interesting situation. If I remember correctly, IBM even sells some of its servers featuring AMD chips now, so really for IBM their business model isn't quite so focused on chip production these days. 
On the flip side, IBM is the sole manufacturer of CPUs for all three of the newest game consoles, with AMD/ATI supplying the GPU for two of them, and NVidia supporting just the PS3, which is already shaping up to be a disappointing failure.<br /><br />My prediction is that NVidia is either going to be bought by Intel or IBM, or they will have to start making their own x86-type general purpose CPUs to stay in the game. If my predictions are right on where AMD may lead the industry by bringing the functionality of the GPU directly into their CPUs, thus killing the need for add-on cards and dedicated GPUs for most people, NVidia will quickly find themselves in trouble with their current market focus. Intel may decide that they'll just continue to evolve their own graphics technology and beef it up to compete with AMD's hybrid platform, and won't need to buy NVidia. Although, I personally think they'd both be better off if they did. Also, I think it's a real long shot that IBM will want to buy NVidia, being that they don't even compete in the mainstream x86 market where NVidia's graphics cards are most commonly used. I suppose the other real long shot could be that AMD ends up buying NVidia too...but I highly doubt it, as cool as that thought may be. <br /><br />Ever since I installed my first GeForce card I have been a fan of NVidia's products, so I hope they don't end up finding themselves all alone and closing up shop 10 years from now when the CPU/GPU hybrid becomes the norm. 
As per usual, only time will tell.Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com0tag:blogger.com,1999:blog-14730839.post-39871187389513077902007-07-10T20:58:00.001-05:002007-07-10T21:17:24.585-05:00This simple guide is intended for Ubuntu 7.04 (Feisty) users, but may work for other releases as well.<br /><br />To get moto4lin to work right, you'll also need the p2kmoto package, but for some reason the Ubuntu guys put moto4lin in their repositories, but not p2kmoto. I noticed there's a source package in Gutsy (7.10), but no .deb. So, here's how to get it quickly working on your system if you don't want to bother compiling it.<br /><br /> $ sudo apt-get install moto4lin<br /><br />After installing this package you will need to download the .deb for p2kmoto from somewhere. A quick Google search found the following sources for me:<br /><br /> http://members.chello.cz/gliding/p2kmoto_0.1<br /> http://www.timothytuck.com/component/option,com_remository/Itemid,0/func,fileinfo/id,4/<br /> (this last one seemed to stall out for me, but might work for you)<br /><br />Now of course, I cannot vouch for either one of these sources, so <span style="font-weight:bold;">download at your own risk!</span><br /><br />Once you have installed both moto4lin and p2kmoto, you (in theory) should be able to just type in:<br /><br /> $ sudo moto4lin<br /><br />However, this never worked right for me... Instead I had to run the p2ktest program first, and then moto4lin worked after that. Also note there are ways to change your udev rules so you don't have to run this app as root, but as long as you're careful you should be fine running it with sudo, as listed above.<br /><br />It's still not perfect, and a little slow-seeming to me, but it gets the job done...and for free! Those bastards at "the new AT&T" wanted to charge me $50 for a cable and some crappy software CD (most likely Windows only anyway). 
A handy little $15 multi-tip USB cable set and some good ol' open source software just seemed like the better option to me ;)Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com0tag:blogger.com,1999:blog-14730839.post-255490365970930242007-05-15T00:41:00.000-05:002007-05-15T01:31:44.954-05:00Release NamingsThis is part one of two posts I plan on making...although who knows when I'll get around to the massive second part :P<br /><br />I've noticed something in the software world that annoys me... No one can seem to come to a clear definition of what release types mean. What exactly is an "alpha", "beta", "release candidate", etc...?<br /><br />To me these namings have always had a clear-cut meaning; why don't they to everyone else? I'm not going to be ridiculous enough to propose that mine are the be-all, end-all and everyone should conform to these standards, but it certainly needs to be discussed and standardized at some point.<br /><br />First off we have the elusive "alpha" release. I've always thought of the alpha stage in development to mean a work-in-progress...i.e., new features are still being developed and added. The code base is in a state where you can start testing it to some extent and it's somewhat usable, but there are still new features being added, and the currently implemented ones may very well be completely rewritten depending on how tests and such go. So let's just say alpha means: The software in question is usable, but still under active development, not all features have been implemented yet, and the code is still subject to radical change. If a user is feeling really adventurous they can go ahead and give it a shot, but stability is most certainly not guaranteed.<br /><br />With that definition, let's take a step back to the rarely used "pre-alpha." 
To me this would mean the same as alpha except it's not even usable yet, and there's no point in trying to test the software as a whole yet, although specific classes and functions may be complete. Also, with this definition there's no reason to really ever offer up a public release dubbed a pre-alpha. The only way an end user should ever get their hands on pre-alpha code is if they're compiling from CVS/Subversion/etc.<br /><br />Next, on to another commonly used term that rarely has the same meaning from project to project: "beta" releases. What defines a beta release seems to have almost no consensus among different developers. To me, a beta release means that all features have been implemented and from this point on all subsequent releases will be to fix bugs and tighten up the code. It always drives me crazy when developers put out a so-called beta release when all the features aren't there yet; that just isn't a beta...it's still alpha! Not only should all planned features be implemented upon the first beta release, but there should have been a reasonable amount of testing to make sure there are no major, commonly found bugs in it. A beta doesn't have to be completely stable, but it should certainly be more stable than an alpha. Now of course, this is hard to define, as the beta phase is there strictly to find new bugs, but like I said, it should at least be as stable, if not more so, than the alpha release(s) were. Perhaps another way to look at it is that alphas are almost exclusively for developers to test, but betas are for the end users to start testing. The beta cycle of development should probably be the longest of them all as well. Sometimes it takes time to find all the major bugs in your code, and there's no sense in releasing a release candidate until you've had an adequate amount of time and testers to find any showstoppers. 
So, to sum up my definition of beta: all features have been implemented, some testing has been done, and there are no known critical bugs in the code at the time of release.<br /><br />Next we have the "release candidate," sometimes simply referred to as the "rc" for those who just can't handle typing that much. Not all projects even release a release candidate, but I think it's a good idea. Your release candidate comes after you've been in beta for a decent amount of time and all the known bugs have been fixed. The main purpose of the release candidate is to grab a larger group of testers than you had for your beta. As some users are too paranoid to run beta software, and many times for good reason, a release candidate is more acceptable to those who need absolute stability. Your release candidate phase doesn't have to last very long, but it should probably be more than a week (I'm looking at you, Ubuntu!). Once again, you should have zero bug reports open upon release of your first release candidate, as this release should...in theory...be just as stable as your final release. Of course, hopefully a few will be found before you let loose that final release, since that's the whole point of this phase. And if you're saying to yourself "what if my code doesn't have any bugs in it?" you're just kidding yourself; everyone's code has bugs. If no new bugs have been reported since your first release candidate, you probably still don't have enough users testing your code.<br /><br />Once the RC has been out there for a while, and you have absolutely no bugs left, then you release your final version. Now of course all development teams will have this version, but not all of them call it the same thing. Some will call it "gamma" as it's the next letter after "beta" in the Greek alphabet, and many of the game developers will call it the "gold" release because of the color of the old CD-R masters they used to send off to their replicators.
Of course today most of them will use the same silver-ish CD-Rs everyone else does, but the name remains. And for many developers, especially in the open source community, their code never touches a physical disc. So no matter whether you call it gold, or just "final," it's the last release you should ever make for this version number. The only complaint/suggestion I'd have here is don't rush to get to final; there's no shame in having a lengthy beta phase if that means your final is rock solid when it finally hits. Just as with the rest of this, we really should agree on one name that we all use; I prefer the simple "final" myself.<br /><br /><span style="font-style:italic;">So to recap my definitions:</span><br /><br /><span style="font-weight:bold;">Pre-Alpha:</span> code almost unusable, in heavy development, definitely not for end users<br /><br /><span style="font-weight:bold;">Alpha:</span> code still in heavy development, but somewhat usable; not all features have been implemented yet; end users shouldn't expect stability<br /><br /><span style="font-weight:bold;">Beta:</span> all features have been implemented, code is fairly stable, okay for users to test at their own risk<br /><br /><span style="font-weight:bold;">Release Candidate:</span> all known bugs have been eradicated, safe for all to test<br /><br /><span style="font-weight:bold;">Final:</span> super stable, well tested, safe for mission critical use<br /><br /><br />In my next post I'll discuss an even sloppier area: version numbering.Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com0tag:blogger.com,1999:blog-14730839.post-63564046218738862112007-04-05T10:28:00.000-05:002007-04-05T10:40:40.369-05:00A quick thoughtLike many other poor souls, I'm still forced to deal with Windows every day at work. So, I end up using it much more than I would like, obviously. Anyway, I just had a thought today: why hasn't anyone created a package manager for Windows?
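To be concrete about what I mean by "package manager": at its core it's really just a database of packages, a dependency resolver, and an install loop. Here's a minimal sketch in Python; the package names and repository contents are made up purely for illustration, not taken from any real tool:

```python
# Minimal sketch of the core of a package manager:
# look up a package's dependencies, then install them in order,
# dependencies first. Repository contents are hypothetical.

REPO = {
    "firefox": ["gtk", "nss"],
    "gtk": ["glib"],
    "nss": [],
    "glib": [],
}

def resolve(pkg, order=None):
    """Return pkg plus all its dependencies, dependencies first."""
    if order is None:
        order = []
    for dep in REPO[pkg]:
        resolve(dep, order)
    if pkg not in order:
        order.append(pkg)
    return order

print(resolve("firefox"))  # ['glib', 'gtk', 'nss', 'firefox']
```

A real tool like Apt layers version constraints, download verification, and upgrade tracking on top of this, but the resolve-then-install idea at the center is the same.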
If I could just do an apt-get install firefox on these Windows machines, with all updates being automagically handled by the PM, life would be so much simpler. Well, obviously I'm not the first to have this idea, as there seem to already be a couple projects under way to do just this. The first one I found is simply called <a href="http://winpackman.org">WinPackMan</a> (or the Windows Package Manager), although I've yet to have a chance to try it. It appears to still be in an alpha state at the moment though...<br /><br />If there were a reliable, open-source package manager for Windows, it could really help people transition to the Linux world much more easily down the road too. Package installation/management is one of the first things new Linux converts find to complain about when they attempt to make the switch. And it's not really their fault, nor is it the Linux community's fault...we simply do things differently. So, once again... if Windows users started using package managers akin to Apt, Pacman, Yum, Portage (yes, Portage can be used for binaries in addition to compiling from source), etc., then things would probably feel a lot more natural to them when they make the switch.<br /><br />Perhaps a more modular/extensible package manager like SmartPM needs to be ported over to Windows? I think it would very much be worth it.Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com0tag:blogger.com,1999:blog-14730839.post-42305993420269521852007-03-18T19:36:00.000-05:002007-03-18T20:47:04.919-05:00Thinking Out Loud: Episodic GamingIf you spend much time reading all the game sites about new concepts and ideas, probably one of the biggest buzz terms you'll hear is "digital distribution."
The ability to provide your content directly to the end user and skip all the middle men is an interesting prospect to many developers, especially the smaller independent developers whose chances of ever getting their games carried in the Walmarts and Best Buys of the world are slim to none. The concept of digital distribution is appealing to many, although most of the publishers and retail distributors are probably scared to death of it. With the cost of development rising it could be a major benefit to the industry as a whole, even though it could kill some of the juggernauts that currently run the show. Not only does DD make the thought of self-publishing your own games much more realistic than ever before, but it also offers the actual gamers the possibility of getting their fix at cheaper prices.<br /><br />Another concept that builds on top of DD is the idea of "episodic content." The idea is that rather than buying a big epic game all at once, the gamer buys it in smaller segments, thus making their gaming experience more akin to a TV show than a movie. Very few companies have actually tried realizing the concept so far, but it's fairly inevitable that it will come to be a normal occurrence in the future.<br /><br />Now I will try to offer some of my ideas on how to successfully pull off this concept, which I have yet to see fully realized in our industry. Of course, being that I'm a fledgling game designer/developer myself, some might ask why give away these ideas if they could be your own big break? Well, I see sharing them as a much greater benefit to the industry as a whole than keeping them to myself. The method of delivery isn't nearly as important as the real meat of the game, AKA: content.<br /><br />So, first off, episodic gaming requires your users to have a broadband internet connection. This is a little bit of a problem, as not everyone has one yet.
In fact, according to recent surveys and statistics you'll find that in the United States (where I reside), only about half of Americans have broadband at their homes. And from my own personal experience, it's even less than that if you're in the rural South (also where I currently reside). Since America is so spread out, the further away you live from a major city the less your chances are of having anything better than dial-up. So this presents a major bottleneck in the concept...but not as huge as it may sound at first. Just as with regular television shows, you can offer offline versions of your episodic content if you're popular enough and have the distribution channels to support you. Once you finish a "season" in your game, you can bundle it all up on a DVD or two and sell it through traditional retail means (once again, if your game is popular enough to have retail distributors behind it). So this means that some smaller companies might find themselves online-only for a couple years, but after their audience hits a certain point, they might find the big boys coming to beg for the rights to publish the offline version. Sound like delusions of grandeur? Perhaps, but I'm merely trying to get an idea across...I'm not saying it will be easy, nor that it will happen to many.<br /><br />So, where do we begin... first off, let me say that if you want to make any money off your game you have to give it away for free. Sound crazy? It's not... In the television industry all new shows start off with a "pilot" episode. Then in the game industry many games will offer a "demo." What I'm proposing here is simply to combine the two. It's not that new a concept either; it's what made shareware titles like Doom take off to where they are today. So, here's what you do: offer up the entire first episode of your game 100% free to everyone.
You build a digital distribution/update service into that initial release, so when you release episode two, the gamer simply starts up the same game they've already downloaded for free and does all their purchasing and downloading from within it. Now of course, there might be a third-party mechanism you want to use, like Steam for instance. That's fine, but once again... I say give away your first episode for free. Give the gamers something to play. Let them see just how good your series will be.<br /><br />Now of course, this puts a lot of weight on your first episode. If you're giving it out for free, and no one likes it, you're going to have a hell of a time selling them your next one. Sorry, that's the price of such a service. And speaking of price, if you're a small company trying to do this all by yourself, the bandwidth to host that first free episode is going to be monstrous. Of course, you could always look for investors/partners to help carry that load, but then that means you have to give them a cut of your profits once you start making a return on your investment...and you may not want to get into that situation if you can help it. There are also things like BitTorrent to help lighten the load, but unfortunately a lot of ISPs (especially those on college campuses) are trying to block it out of existence over its potential piracy uses.<br /><br />Then of course there's always advertising, whether it simply be on your site, perhaps displayed during the download process, or actually in your game. All I have to say about that last one is be careful: gamers will put up with it to a certain extent, but if you overdo it or do it wrong (like putting a big Mountain Dew billboard in the middle of an ancient medieval world), there will be a backlash. Also it's important to remember that since you intend on selling subsequent episodes after the first one, people will probably feel pretty angry with you if they are paying for it and seeing ads at the same time.
Once again, there's a small window where you can get away with it, but eventually your users are going to catch on. Be careful...<br /><br />Another good idea may be to go ahead and prepare the first 2 or 3 episodes before you ever release that first one for free. That way, if people like it, they can go ahead and download the next episode or two right away, and you can start paying off that bandwidth debt right away too. Some may even feel compelled to finish the entire season before "airing" the first episode, but once again...this is somewhere you need to be careful. If your audience realizes that you've completed the entire game and are simply dripping it out bit by bit so you can charge them more in the long run than you would have selling it all at once, they're not gonna be happy.<br /><br />And now we come to the next aspect you need to consider... price. Luckily digital distribution offers you a beautiful thing called scalability. The bigger your audience, the more money you make, and thus the bigger the bandwidth you can afford too. I would suggest selling your episodes as cheaply as you can afford to. The cheaper they are, the more likely someone will be willing to pay for them, and thus you can potentially make a lot more money by selling them at a lower cost. Some may also choose to sell their new episodes at a higher price point the first week or two after release, and then lower the price gradually over time. For example, you may charge $20 for your episode at first, then a few weeks later drop it to $10, then later on down the road drop it to $5. Of course that could just as easily be 5, then 2, then 1, depending on your business model.<br /><br />When deciding how much to charge for each episode, you need to really look at how big each one will be, and how many you plan on releasing over the course of your season.
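To put some rough numbers on that trade-off, here's a quick back-of-the-envelope sketch; every figure in it is hypothetical, just to show how price, episode count, and audience size interact:

```python
# Back-of-the-envelope math for episodic pricing.
# All numbers are hypothetical; the point is the trade-off.

def season_revenue(price_per_episode, episodes, buyers):
    """Gross revenue for a season, assuming every buyer gets every episode."""
    return price_per_episode * episodes * buyers

# 20 short episodes at $5 each vs. 5 long episodes at $20 each,
# sold to the same hypothetical audience of 10,000 buyers:
cheap = season_revenue(5, 20, 10_000)
premium = season_revenue(20, 5, 10_000)

print(cheap, premium)  # 1000000 1000000 -- same gross per buyer either way
```

The sticker price only matters if it changes the number of buyers; the whole bet behind cheap episodes is that a $5 impulse buy grows the audience enough to beat the $20 model.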
If each episode only consists of a single level that will average out to about an hour or two of gameplay, but you plan on releasing 20 of them, I'd suggest selling them for about $5-10, if not less. If you plan on releasing 5 or 6 episodes with about 6-8 hours of gameplay each, you'd probably want to charge around $10-20. These are all factors you need to weigh.<br /><br />Lastly there are a few more small things to take into account. Do you even want to have seasons? Perhaps you just want to start with episode one and never stop until you're ready to end the series. If you do go with a season model, perhaps you should give away the first episode of each subsequent season for free, just as you did with the very first one, in case someone wants to start from there? Do you want to let your audience pick and choose which episodes they buy, or do you force them to own all the prior episodes first? (i.e., you can't buy episode 4 unless you've already played through and completed 1, 2, and 3) These are all tough, and very important, questions you need to consider. Since you're giving away the first episode for free and merely selling the content of the subsequent episodes, perhaps it would be more beneficial for you to use an open source model for your actual game engine and digital distribution service? Maybe you might even want to build a general purpose engine that multiple "shows" can be purchased through, rather than just your own? There are all kinds of options out there for you, and even more factors than I have covered here. Episodic gaming offers a potentially very compelling experience for gamers and developers alike.....but when will anyone be ready to really pull it off?
Are you?Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com0tag:blogger.com,1999:blog-14730839.post-63304407521341762542007-03-11T17:05:00.000-05:002007-03-11T17:09:38.129-05:00Killer GamesThroughout recent history, every time a new line of electronic products is released, it doesn't really take off and become mainstream until it finds its "killer app": usually some form of content that you can only experience through this new medium, although not always a type of entertainment. Back in the 1980s the spreadsheet was supposedly the killer app for the PC, and it was for many businesses. However, the PC did not find its true killer app for home use until the world first experienced the world wide web. In the late 1990s the DVD format was taking off very slowly until The Matrix came out. After that, anyone and everyone had a DVD player, and you almost always found a copy of The Matrix on DVD in their collection. Now of course, once that killer app has been found and had time to thoroughly saturate the market, the technology becomes commonplace and then the killer app is no longer essential even though it once was. It is said that Nirvana's "Nevermind" was the killer app for the CD player. Apple's computers have been mildly popular for decades, but it doesn't seem they really started taking off till they found their killer app in the iPod. And if we want to take the concept way back, there was the Christian Bible for books made from a printing press. Yet as important as all these were, I'm more interested in games for this little rambling session.<br /><br />I'm not really sure what the killer app for the Atari was; I guess it was a little before my time. That, and I've still yet to hear a definitive answer. Some would try to argue Pong or the awful port of Pac-Man that was released for it, but I'm still not so sure from the mixed reports I've heard.
When the Nintendo Entertainment System dropped in on the USA in 1985, it came preloaded with its killer app: Super Mario Bros. Sure there were many other important titles in that generation, but Mario made the NES and the game industry what it is today. Mario 1 (as some like to call it, even though it's neither the most accurate title nor number) was a must-have game. When someone talked about wanting to get a NES, it was a safe assumption that they wanted this game. Even though many other classic NES games may not be in the same genre or anything, Mario set the tone for that generation. And not only that, but Nintendo included it with your system by default. This was pure genius. They did the same thing with the GameBoy a few years later. Tetris did not always come with a new GB when you bought it, but it often did, and it was certainly the killer app that got that handheld system rolling.<br /><br />As we go into the 16-bit generation things aren't quite as clear. I would say that Sonic the Hedgehog, along with its subsequent sequels, was the killer app for the Sega Genesis. However, it could also be argued that Mortal Kombat was the killer app, as it came to the Genesis with all its gore and fatalities fully intact, while the SNES had a bloodless, neutered version for its home users. The Super NES's killer app would pretty clearly be Super Mario World this time around (once again included by default when you purchased a new system).<br /><br />As the game industry began to transition into 3D games with the so-called 32-bit era, the Sega Saturn never seemed to find its killer app, at least not with American audiences (which of course I'm more familiar with, being that I live here). The new player in the console biz at the time, Sony, with its PlayStation, would not really take off until the release of Final Fantasy 7.
Sure, in the scope of things the Madden football series would easily outweigh the FFs in sales numbers, but FF7 was the one that made most people say, "I've got to have one of these." The Nintendo 64 would come out a year later than the other two, with Super Mario 64 included by default, as Nintendo tried to make it 3 for 3. Unfortunately, as innovative as Mario 64 was, it just didn't have the same fun factor as its 2D predecessors. The N64 would not find its killer app until a couple years later, when The Legend of Zelda: Ocarina of Time was released for it. Zelda had always been an extremely popular series on prior Nintendo systems, but this time it got to take the spotlight away from the long-running front runner Mario.<br /><br />As the next generation came about, Sega would release their very last console, the Dreamcast. Sadly, just as with the Saturn in the prior generation, the DC would never truly find its own killer app, or at least not in time. Sonic's transition into three dimensions was even more coldly received than Mario's had been in the last gen. Innovative and quirky games like Jet Set Radio would make small dents as well, but not enough to really matter. Sega's greatest attempt would be the epic release of the Shenmue series. At the time it was the most expensive game ever made, with an unheard of 5 years in development. Once again the series would surprisingly fall flat. Its sequel, Shenmue 2, would not even receive an American release, as the first one had sold so poorly here. Twice in a row Sega had failed to find their killer app, and it was too late to try and continue.<br /><br />Oddly enough, when the PlayStation 2 was released, it became an instant hit even with no killer app. In this generation, the PlayStation brand name would be all the killer app Sony needed to crush its competition a second time around.
However, even with such a strong fan base, the PS2 would eventually need a real game to hold the crown, and almost two years later it would find it with Grand Theft Auto 3. GTA3 would be the system seller, even though, oddly, the PS2's sales numbers had already marked it as a success with no really worthwhile games to show for it. As for Japanese consumers, many of its initial purchasers bought it for its cheap DVD player functionality and would not buy an actual game for it for some time.<br /><br />Microsoft would now make its first attempt into the console gaming realm with the Xbox, and quickly found its killer app in Halo. Halo and its sequel, Halo 2, would become the best-selling games to that time, but still would not be enough to take the PS2's crown. In fact, even with record-breaking sales numbers for the Halo series, the Xbox would barely sell any systems at all in Japan.<br /><br />Lastly, Nintendo would try to regain the supremacy it once had with the GameCube. However, this would mark the first time that Nintendo did not include a Mario game with its system at launch. Sure, Mario Sunshine came along, but it was not actually included with the system as were its predecessors. Not only that, but the sales of Mario Sunshine would be almost as abysmal as the Xbox's sales in Japan. Mario had lost his magic touch and was no longer even remotely a candidate for killer app. Next Nintendo would attempt to bring their last-gen champion in, but The Legend of Zelda: The Wind Waker would have fairly mediocre sales as well. In the end, Super Smash Bros: Melee would be the closest thing to a killer app for the GameCube, but it would be hard to call it a system seller.<br /><br />Now with the history behind us, let's look at the new generation of systems. First up is the Xbox 360. Released in the fall of 2005, almost a year and a half later there is still no killer app.
Yes, Gears of War was immensely successful and has already outsold both Halo and Halo 2, numbers-wise. Yet it still does not seem to be a strong enough title to be the system-selling killer app MS needs. In Japan, MS was hoping that the recently released Blue Dragon would become the killer app there. However, even though the title drove Xbox 360 sales well over the combined sales of the first system, it still has a pretty weak market share there.<br /><br />The PlayStation 3, although just released about 4 months ago, does not seem to be doing so well. Sony had hoped its built-in Blu-ray player capabilities would be its killer app, but unlike with DVD in the prior generation, Blu-ray has yet to become a proven format. As it appears its brand name will not be enough to drive the system's sales this go 'round, Sony needs a killer app, and soon, if it doesn't want to go the way of Sega. Sadly for them, there do not seem to be any upcoming games that look like they will be able to do the trick any time soon.<br /><br />And finally there is Nintendo's Wii. It seems after two generations of failure, Nintendo may be poised to take back their old spot at the top. However, the Wii is in an odd situation itself. The Wii is already immensely popular even with those who do not traditionally play videogames, yet it, like the PS2 before it, does not seem to have any one game ready to become its killer app. Nintendo's latest iteration in the Zelda series seems popular enough, but it's still no killer app. And its tacked-on Wii-mote functionality is not enough to push it to system-seller status, since you can have almost the same gaming experience with the GameCube version. No, what's selling the systems is the revolutionary Wii-mote itself. There are lots of games that show off its potential, but it still appears it might be a while before any developers actually realize its full potential.
Yet that potential, along with the simple yet fun WiiSports package that comes with every Wii sold, seems to be enough to keep gamers guzzling down the machines just as quickly as Nintendo can produce them. Yet just like the PS2 again, it will eventually need an actual game to take over as its killer app. If and when that happens is still unknown, and some fear that if it doesn't happen before this Christmas, Nintendo may find themselves with a few million disappointed and angry Wii owners.<br /><br />Now, of course it's no secret I have <a href="http://www.opengameconsole.org">my own console aspirations</a> for this generation and potentially the next, but I realize that no matter how novel the open console format may be, it'll take an exclusive killer app to make it really happen.<br /><br />Interestingly enough, I'd like to finish up by discussing HD-DVD, Blu-ray, and HDTV in general. HDTV and HD formats are inevitable and have slowly filtered into US homes. However, they don't seem to be exploding, nor is there any clear victor in the HD disc wars. This, once again, is because there has yet to be any HD killer app. There is no movie, TV show, or videogame that's making consumers say "I've got to have one." What will it be? Who knows.... but it's bound to be only a matter of time, and I don't know about you, but I can't wait to experience it ;)Anonymoushttp://www.blogger.com/profile/11709921573280646344noreply@blogger.com0tag:blogger.com,1999:blog-14730839.post-46130297001549674072006-11-22T11:39:00.000-06:002006-11-22T12:59:42.150-06:00Internet Video (youtube and beyond)So I just read an <a href="http://wired.com/wired/archive/14.12/youtube.html">interesting article over at Wired</a> about YouTube/Google trying to figure out just how to adapt their current systems to allow for advertising, how to handle it, and what to do with the profits.<br /><br />It's a very interesting topic.
I've also questioned just how sites like YouTube were able to make any money off their service... I don't think I've ever noticed a single ad on their site, and certainly not on embedded YouTube videos found on other sites... Looking at YouTube now, I notice there are banner ads above the videos, though I could swear those haven't always been there. Double-checking with GoogleVideo (YouTube's new owner, for those who've been living under rocks), there are still no ads that I can find.<br /><br />Even though the vast majority of content found on these and similar sites is given to them at zero charge, bandwidth and server space aren't cheap. Neither are your IT, development, and maintenance crews. In the case of Google, they've obviously got plenty of money, but up until their recent purchase, was YouTube really capable of supporting the millions upon millions of hits they received each day off of simple banner ads?<br /><br />The major advertising industry wants in. They know that traditional TV is a sinking ship and it's only a matter of time before it all but completely dies. The question on everyone's minds, apparently, is how to add advertisements to services like YouTube without ruining the user experience that's made them so popular. I've got a couple ideas...as per usual ;)<br /><br />One idea discussed in the article is putting ads at either the beginning or end of the videos. Apparently it's widely accepted that pre-ads are a bad idea and no one likes them. The problem with post-ads is that there's a very likely chance the user will just stop the video after it's complete and not watch the ad following. Sites such as iFilm and GameSpot have been putting pre-ads before their free videos, and personally, I've never really minded them so much.
The only time they were ever really annoying was when I would go to watch 10 different videos and all of them had the exact same ad, or sometimes sites will have poorly written playlists that play the pre-ad, but then stop and never play the actual video....now that's annoying! However, as most of the new video sites are all Flash-powered, I don't think the latter problem will come up too often any more.<br /><br />As YouTube thrives on having their videos embedded into numerous other sites, the ads that surround their video rarely get seen...although they do have kind of a tricky system where, if you want to full-screen the video, you have to click the embedded video, which takes you to the main YouTube site, and then there's a couple seconds where you ...might... accidentally glance at an ad before you can hit that full-screen button. That doesn't stop those industrious few of us who take the time to view the page's source and just put the direct *.swf in our address bar, and Adobe is apparently planning on adding a built-in, real fullscreen feature in the next release of Flash... so there goes that...and really, that was kind of underhanded to begin with, I think...if it was intentional.<br /><br />Google Video has attempted many other things, such as having a pay-only digital distribution service for certain videos...and potentially that could make a little money, if anyone ever actually used it. Another feature many of the major videogame sites use is to allow free, low bandwidth/quality streaming and then require a paid subscription to see the high quality videos. All of this brings up many questions of ownership, especially when subjects such as DRM get brought into the equation. Is the customer merely renting these videos, or are they actually buying a copy of them? Sure, the proprietary, DRM-ified version of VLC Google ships will let you watch purchased videos any time you want, but will those videos transfer to your other machines? Can you watch them under Linux?
On your video iPod? No... Sure, they could strictly offer download-only services, but high quality standard definition video can take quite a while to download even with a decent broadband connection....just imagine how long it would take to have to redownload/stream every HD video you bought from them... It's not feasible.<br /><br />This also brings up another issue... I've never posted anything to YouTube myself, so I don't know the specifics, but it would appear they have a limit on the resolution and bitrate your video can be posted in; who knows, they may do all the transcoding for you automagically. Would a site like YouTube benefit from offering higher quality videos to paid subscribers like the gaming sites do? Would it perhaps be considered kosher for them to offer ad-less low quality videos, but have high quality versions that have both pre- and post-ads tacked on? I personally would be fine with that last option. As long as the ads are 30 seconds or less, I couldn't care less myself. And I think most would probably agree.<br /><br />There's one more option out there, and that's basically virtual channels: a never-ending playlist that would more closely resemble traditional TV channels. What you would do is have users or some algorithm group similar videos into a channel. A user could either watch a specific video or just hit "random video" for that channel; then you would automatically go on to watch another video from that channel after that one finished, and then another, and so on. Then random ads could be placed between every so many videos. The closest thing to this idea I've ever seen was the old Yahoo Music Video service (aka Launch)...but I haven't watched that in years since there's still no Linux support.
YouTube kind of does something like this too, I suppose, by offering up suggestions of similar videos afterward, but it requires user interaction, which some may not want to bother with.<br /><br />If I were Google, I'd probably do a combination of the above. I'd leave regular YouTube usage the exact same way it is now, with the same quality videos. All embedded videos stay as they are; however, if you go to the main site you have the option to watch high quality videos. There you can either opt to have ads tacked onto them, or pay a subscription fee to go ad-less. In addition to this, offer the virtual TV channel service and you're good to go.<br /><br />So, now that we have a good source of profit coming in, there comes the need to discuss who gets a cut of it. Google has already started offering profit sharing on their most-watched videos, which is certainly a step in the right direction. The other problem is that almost every YouTube video out there infringes upon someone's copyright somehow. Just about every video is either going to be a straight rip from a TV show, or a home video with multiple copyrighted pieces of music thrown into the mix. And as per usual, everyone wants their royalties. There are a few different ways to go about this. Either Google can sign a massive contract with the big record labels and TV/movie studios that provides blanket coverage for any of their content that ends up on the site, or Google can get strict and delete any videos that the poster did not get proper consent for ahead of time. The blanket coverage would probably be the easiest...but Google's still reluctant. Why? Well, even though copyright is fairly clear cut in the law books, it's not in the minds of the people. Once a piece of content becomes widespread enough, many would argue it partially belongs to everyone. The fans take a personal stake in franchises.
Combine that with how the average person feels the various industries charge too much for their products, and the situation is no longer so simple.<br /><br />In this situation, I'm afraid I'm gonna have to say it's better for Google to do the blanket contracts. There are so many videos posted that it's near impossible to moderate them all, and good luck getting your users to rat out their fellow posters. Really, since Google is just a business, it's not their place to worry about whether copyright laws need to be changed; that's up to the people. And if they're all too lazy and/or apathetic, then they'll just have to deal with the consequences.<br /><br />In light of that, though, it seems that sites like DeviantArt and the ever popular MySpace have an easier time dealing with the issue. On DeviantArt, users are much more likely to report copyrighted material, as most of them take the art they post very seriously. MySpace, now owned by one of the largest content-producing conglomerates around, News Corp, is very thorough about making sure no one posts music and videos they don't hold the proper copyright for. I think the number one reason they've been successful here is that so many bands have their own accounts and post their music already. If there wasn't such great support from the bands and their labels to begin with, I'm sure infringement would be much more rampant.<br /><br />It's all very interesting to consider. If Google doesn't make the right choices now, their 1.6 billion dollar purchase may become a major mistake, and someone who gets it right will surpass the seemingly unstoppable YouTube.<br /><br /><br /><b>Open Source vs. Proprietary Software, and ESR</b> (2006-10-30)<br /><br />It's funny: about a year ago, I wrote <a href="http://zephyrxero.blogspot.com/2005/07/just-how-free-should-your-software-be.html">a rather lengthy article</a> about how annoyed I was that Eric Raymond didn't think GPL-style licensing was necessary anymore, in favor of more BSD-style, public domain type licensing... and now I find his latest statements to be some of the few reasonable opinions out there. Don't get me wrong, I still stand by the position that GPL- and LGPL-type licensing is very much necessary; however, now there are new topics to discuss.<br /><br /><a href="http://www.redherring.com/Article.aspx?a=18176&hed=Linux+Desktop+Window+Closing%3F+">His statements lately</a> are that the Linux world needs to wake up and realize that as great as open source development is, we can't pretend proprietary software doesn't exist. Unfortunately, at the moment, we still need quite a few components in our Linux distros that have no viable open alternatives.<br /><br />This is the logical middle ground that many Linux users agree with, but few mention, because as in all things, it's the extremists who talk the loudest.<br /><br />Let me first say that I love open source and everything it stands for. I wish all the software out there was open source, but that's just not the way it is. One of the biggest advantages of the open source world is the concept of choice; however, many forget that choice also entails that people can choose to release proprietary software too. For now, a mixture is the way to go. Most of this is going to be geared toward GNU/Linux based operating systems, but it can apply in other areas as well.<br /><br />The Linux community says it wants the common computer user to start using it and other open source software, but they're not willing to make the compromises necessary to make that happen. 
Groups like the Free Software Foundation and Debian have the mindset that you should refuse to use anything that isn't open source/free software, but that's just not feasible at the moment. Linux can be a very easy to use, friendly, yet very powerful operating system. But a large portion of the content people want to use their computers for is based on closed/proprietary components. There are thousands of audio and video files out there that are only available in a proprietary codec such as MP3, Windows Media, or QuickTime. If you're running Linux you have two options: either A. use a reverse-engineered codec that might work sometimes, might not others, and almost never at the same quality as the original proprietary one, or B. use the proprietary codecs. For most, the choice is simple... either stay absolutist and forgo most of the multimedia content available to you, or enjoy your content and get over it. Since the people willing to boycott this content are an extremely small minority, this mindset accomplishes nothing. If there were a considerable number of users willing to do without, say about 15% or more of the entire desktop PC market, then it might inspire change, but right now it's hurting our cause more than it's helping.<br /><br />When we try to convince current Windows or MacOS users to switch to Linux or any other open source operating system, they're going to want to know that they can continue to watch all the videos and listen to all the music they currently enjoy. When we have to stop and explain the complex situations our stubbornness is causing, it makes them question the possibility of switching even more than they would have before.<br /><br />The same thing goes for certain hardware drivers. If someone has an nVidia or ATI video card and they want decent 3D hardware support, their only feasible option right now is to use the proprietary drivers. 
However, once again, many distributions do not install them by default, making a new user feel like Linux is incapable of the 3D graphical performance they've experienced on the same machine in Windows.<br /><br />These drivers are interesting in that the manufacturers have gone out of their way to make sure they work on Linux systems and integrate with the kernel properly. The drivers may be proprietary, but they are freely redistributable, so there's nothing stopping distros from shipping them with their systems by default. In the case of multimedia codecs it's not so simple. Most of the solutions available for playing proprietary multimedia formats under Linux are in questionable grey areas as far as legality goes, and of course no one wants to get into legal trouble just over codecs. However, there is another option: the companies that make these codecs are willing to make them legally available if the distro developers make the proper arrangements.<br /><br />A company named Fluendo recently licensed the rights to release a free, legal codec for MPEG/MP3 multimedia files for any Linux distro. Any distro that wants to include MP3 support out of the box now can, yet there are still some who won't because it's a proprietary codec. A few distros like Linspire/<a href="http://freespire.org/">Freespire</a> are taking a proactive route on this issue, though. They have gone the extra mile to make sure they can legally include support for things like QuickTime, Windows Media files, and encrypted DVDs. They also include all the proprietary drivers, as well as Java and Flash. Flash, once again, while proprietary, is freely distributable, yet many distros still choose not to include it.<br /><br />Java is a very interesting case, as Sun has announced plans to make it open source within the next year. 
This is good because there are a large number of programs developed in it, and things like GCJ just don't cut it when it comes to speed and quality.<br /><br />It's not surprising to me that <a href="http://wiki.freespire.org/index.php/Freespire_Leadership_Board_Home">E.S.R. is an advisor for Freespire</a> now, given their pragmatic middle ground stance. I for one fully intend to give it a shot in the near future. I've heard nothing but negative comments about Linspire (once called Lindows) from all the other Linux users I know, but I think it's time to find out for myself, especially now that they have a community driven, free edition. However, since they've only released an initial offering that doesn't vary much from prior versions of Linspire, I'll wait for their upcoming 2.0 release this spring.<br /><br />For now I'll stick with Ubuntu, and continue to jump through hoops just so I can use my computer the way I want to... If only they would take a similar approach, maybe they'd be able to make greater strides against their so called <a href="https://launchpad.net/distros/ubuntu/+bug/1">"Bug #1"</a>.<br /><br /><br /><span style="font-style: italic;">Update:</span> I just found an Ubuntu derivative called "<a href="http://linuxmint.com">Linux Mint</a>", which is basically just Ubuntu plus Flash, Sun Java, and all the proprietary codecs (including DVD) built in by default. This seems great, but most likely it's still in that questionable legal grey area and just being hosted from a country that doesn't care about such things as much as places like the US do...<br /><br /><br /><b>Digg Annoyances</b> (2006-08-10)<br /><br />So everyone's claiming Digg is going to overtake Slashdot, but I'm not so convinced... 
I've read it occasionally for the last couple of months. Then the other day I noticed them promoting some new features, and upon clicking the link I was welcomed with a "You must upgrade Flash" notice, as I keep seeing all over the web. The problem is that Adobe has still yet to release any version of Flash beyond version 7 for Linux users. So basically, Digg has made it clear to me that Linux/Unix/BSD users are pretty low on their priority list. I'm tired of being spat in the face by all these sites, and I expected better from the supposed tech site that is Digg.com.<br /><br />Then today, strike 2 happened. I noticed an article talking about how <a href="http://digg.com/hardware/AMD_considering_open_sourcing_ATI_drivers">AMD is considering open sourcing their ATI drivers</a> and that there were no stories about it on Digg yet, so I made my first submission. Oddly, the "diggs" came a lot slower than I expected for such an important topic, and when I came back to check whether it had gotten enough votes to show up on the front page, I found that it had, BUT it had been marked as a duplicate by many users... some even felt the need to tell me in the comments, and one of them posted a link to the story I had supposedly duplicated. I quickly saw that the article with many more diggs was posted over an hour after mine, yet somehow it shot up, and now I was the one being marked as a duplicate.<br /><br />Sure, this is something pretty silly to get upset over, but it's also very disheartening to have it happen on my very first submission. Being the guy that I am, I quickly came up with a simple solution: whenever someone marks an article as a duplicate, they should also be required to provide the URL of the article that has supposedly been duplicated. Then a simple script can check the times they were posted and determine which is the real dupe. 
They could even take it one step further and move all the duplicate reports onto the actual dupe instead of the real original. It sounds like a lot of trouble for something so silly, but it would have meant a lot to me if my first post weren't treated so poorly over something that could be so easily corrected :(<br /><br />I've submitted a bug report for both of these issues, but I'm not holding my breath that they'll actually do anything... so for now, I'll stick with <a href="http://slashdot.org">Slashdot</a> ;)
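For what it's worth, the duplicate-check script I'm proposing really is trivial: given the reported story and the URL of its supposed original, compare submission times and whichever was posted later is the real dupe. This is just a sketch of that idea; the function, field names, and sample URLs are all hypothetical, since Digg's actual data model isn't public.

```python
from datetime import datetime

def resolve_duplicate(reported_url, claimed_original_url, posts):
    """Given a duplicate report, return the URL of the *actual* duplicate.

    `posts` maps each story URL to its submission time; whichever story
    was submitted later is the real dupe, and the earlier one keeps credit.
    """
    reported_time = posts[reported_url]
    original_time = posts[claimed_original_url]
    return reported_url if reported_time > original_time else claimed_original_url

# Hypothetical example: story-1 was submitted first, story-2 over an hour later.
posts = {
    "digg.com/story/amd-ati-drivers-1": datetime(2006, 8, 9, 14, 5),
    "digg.com/story/amd-ati-drivers-2": datetime(2006, 8, 9, 15, 30),
}
# Even if story-2 got more diggs, the timestamps show it is the duplicate.
dupe = resolve_duplicate("digg.com/story/amd-ati-drivers-2",
                         "digg.com/story/amd-ati-drivers-1", posts)
```

The same comparison works no matter which of the two stories a user happens to flag, which is exactly what would have saved my submission.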