Category Archives: Technology

Brief summary of wassup in the tech world…

Every day, after I settle myself down on my medicine ball eating my happy sausage & egg white on whole wheat toast breakfast, I start reading tech news. It’s important, very. Hell, it’s vital I’d say. By tech news I mean anything from gadget sites like Gizmodo & Engadget to Ars Technica, TechCrunch, Mashable and Hacker News. HN is still the best one so far cause the front page content is generally filtered by rather tech-savvy members.

A lot of articles on HN are very specialized and specific to certain topics. They can range from assessments of certain techniques and patterns, to new frameworks and libraries, and even micro-optimizations in bitwise operations. I’ve learned a ton from those. Basically this blog post is to highlight some of the things I thought were cool, broken down into hopefully non-technical, human-readable pieces 🙂

1. Egor Homakov hacked GitHub



So what is GitHub? For non-geeks, GitHub is a social coding site. Instead of sharing life dramas, you share code and cool projects you’ve been working on. It’s incredibly popular, and a developer without a GitHub/BitBucket account is like a designer without a portfolio. Having cool projects on GitHub (cool is defined by the number of forks, a.k.a how many people develop things based on your code, and the number of watchers, a.k.a how many people care about your projects) can easily get you a job at any tech company, cause it actually demonstrates your ability to produce good maintainable code, which is the unit of work developers produce.

So GitHub is like Facebook for coders, and it’s built on top of a framework called Ruby on Rails (actually just Rails; Ruby is the language). In designing a framework it’s always tricky to decide how much customizability you want to offer. There’s no one-size-fits-all, really. And from what I’ve read, there’s been a debate about how Rails enforces whitelisting, and security in general. How much security does a developer want? It’s always been a tough question.

What Egor found was a known vulnerability in Rails. He posted an issue which got ignored, which led to his demo on the master branch of the Rails project itself. GitHub disabled his account while they were trying to patch it, then re-enabled it later on, which was ok. You can read more about this right here: Hacker commandeers GitHub to prove Rails vulnerability

2. Sabu, leader of LulzSec got arrested

So Sabu was the leader of LulzSec, which got merged into Anonymous earlier. The group has been conducting a series of DDoS attacks against banks and government websites to protest or, sometimes, take revenge for certain characters, such as the soldier who got arrested for leaking to WikiLeaks.

Sabu betrayed Anonymous

First of all, what is DDoS? DDoS is a Distributed Denial of Service attack. The key is distributed. A DoS attack means someone floods your website/server with tons of requests (by tons I mean it can go up to billions and more). Since those requests occupy a large chunk of your server’s capacity to serve other users, it will eventually get overwhelmed and shut down. If you bounce/restart it, the same thing happens.

An attack coming from 1 machine is easily blocked thanks to its unique IP address. You can simply blacklist that IP. A distributed one is much harder since it comes from multiple IP addresses, often machines running malicious software without their owners knowing. Such machines are called zombies.
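To make the single-IP case concrete, here’s a toy sketch (made-up threshold, definitely not production code) of the kind of per-IP filtering that stops a plain DoS, and why a thousand zombie IPs walk right past it:

```javascript
// Count requests per address and drop anything over a threshold.
// One flooding IP hits the limit fast; a DDoS spreads the load so
// each zombie stays comfortably under it.
var LIMIT = 100; // max requests per window (arbitrary number)
var counts = {};

function allowRequest(ip) {
  counts[ip] = (counts[ip] || 0) + 1;
  return counts[ip] <= LIMIT; // false once this address floods past the limit
}
```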

Now an organization can function the very same way. A normal company today has a board of directors, a CEO, a CTO, CxOs and such, who are the decision makers for the company. This can lead to what is called a “single point of failure”: if the top tier is gone, the company collapses. LulzSec was like that, with a leader, Sabu. Sabu got arrested and LulzSec is gone.

Anonymous, however, isn’t. It is a “distributed” organization, meaning there’s no single point of failure. Each subgroup, or even person, acts on their own will to serve the organization’s philosophy, which can be interpreted any way one wants. Therefore, with LulzSec gone, Anonymous isn’t guaranteed to be weaker, since each subgroup functions by itself, still following the philosophy and in charge of its own operations.

In the computer world this is what allows an infrastructure to scale horizontally, by replicating and synchronizing redundant data sources. Humans, however, clearly cannot (yet) replicate themselves to the extent that we can back ourselves up to the cloud and such. Anyway, you guys can read the article right here: Sabu betrayed Anonymous

Just some random thoughts… 🙂


What stuff I’ve been working on lately

So it’s been a while since my last blog post, partially because I was swamped with work and some personal business, but mainly because my friend Haruki and I are still trying to design and set up the 1st phase of the MetaDB project. It is currently being actively developed right here on GitHub. We’ve put quite some thought into the data model of the project, which led to several architecture and technology choices.

We’re still using NodeJS as the meat of the whole project. It has been the choice since the beginning for various (unverified) reasons: speed (both in performance and development), scalability and light weight. I said unverified cause we’ve read a lot about it, but technologies are sometimes a YMMV kinda thing, and for our specific use cases NodeJS seems to be a good fit.



We dropped the idea of using NoSQL in favor of PostgreSQL as the database technology. NoSQL is great for unstructured data, but there are way too many relationships among our model objects for maintaining a NoSQL model and doing map-reduce to be worth it. With NoSQL, if we separate some of the components into their own collections, hybrid objects become the result of a map-reduce instead of a join, which turns out to be nowhere near as efficient. If we instead embed those components inside a certain object collection, reusability becomes a big mess. In the end, partially due to our lack of solid NoSQL modeling skills, we took the easy way out, which is SQL.

The framework that drives our API is a custom in-house module called njrpc (Node-JsonRPC). It’s an implementation of the JSON-RPC 2.0 protocol with some additional bells & whistles: namespacing and request interceptors. It also exposes enough low-level calls (at least for our needs) that you can do manual response overrides in callbacks and such.

Part of my purpose for this blog post is also to share our development setup, mainly for experiments with distributed systems and workflows.

Prod environment:

1. For our production environment, Haruki & I both have our own VPSes, each running a PostgreSQL instance with master-slave replication. Configuring this took a while (we’ll document it later), but it’s basically running right now.

2. Each of the boxes also has an instance of metadb-core serving API calls. A simple load balancer (HAProxy, for example) will be placed on 1 box and serve as the entry point for all API calls. This does produce somewhat unpredictable response times, but the tradeoff is redundancy, which is definitely needed for a prod env.

3. The UI will be set up on 1 of the boxes. It’s really lightweight right now, so we don’t immediately see the need for running 2 UIs.

4. We still have to set up database backup and archiving, you know, disaster recovery stuff.

Test environment:

We’re planning to get another, smaller VPS instance for our CI (Continuous Integration) server. This pretty much serves as the integration testing environment for both metadb-core & metadb-ui. Although njrpc is currently set up in Travis-CI, using a 3rd-party CI doesn’t allow us to do some of our customization and setup. Travis-CI is built with RoR and allows testing of NodeJS projects, but there’s version skew and database setup and all that. It’d be much less painful to have our own box dedicated to testing.

Development environment:

1. IDE: I actually run Cloud9 locally as my IDE. It doesn’t have code auto-complete and stuff, but the syntax highlighting and JSHint integration are pretty decent and helpful. The interface is simple and lightweight enough.

2. The dev environment is pretty much a replica of the prod/test env, so it definitely needs PostgreSQL and NodeJS. We also maintain a separate test database with a much smaller data set so that we can easily wipe it out and dump it back in for a fresh copy.

That’s pretty much what we have in mind… quite a lot for 2 developers. It’s very time-consuming but rewarding at the same time, as I’m getting much better at handling async stuff in JS.

Aight guys, have fun and keep on brogramming! Oh BTW, we got our VPS from AlienVPS; they have pretty decent pricing.


How to write NodeJS Unit Tests

I wasn’t a big fan of unit tests during college since the project scopes were so tiny that writing them felt really dumb and repetitive. But now that the requirements and logic of my projects have become a lot more complex, unit tests are actually really useful. NodeJS itself has an assert module that is used to write unit tests. The reason I’m writing this post is that googling didn’t help me much when I first started writing them, and some of the articles were outdated. I’m sure this one will be too, but I’ll try to keep it updated.

So you wanna test your server? Here’s how to do it. The code below was used to unit test my JSON-RPC server. I start by importing the modules:

var http = require('http');
var assert = require('assert');
var jrpcs = require('./jrpc');
var options = {
   host: 'localhost',
   port: 3000,
   method : 'POST'
};

In order to test my server, I have to fire it up:

var server = http.createServer(function(req, res) {
   jrpcs.handle(req, res);
});
Now the tricky part: since everything is a callback, invoking tests right after calling createServer doesn’t guarantee the server is initialized. The trick is to put the test request inside the listen callback:

server.listen(3000, 'localhost', function() {
   var req = http.request(options, function(res) {
      console.log('Test empty body POST request');
   });
   req.end(); // fire the request
});

That guarantees that when I fire my test request, the server is definitely up. Once I get my response and check everything, I close the server. I do that right away cause I only had 1 test, but if you have more than 1, this has to be coordinated.
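One simple way to coordinate that (a sketch I’d reach for, not the only way) is a countdown: every test calls back when it’s done, and only the last one triggers the shutdown:

```javascript
// Run async tests and invoke `finish` (e.g. server.close) after the last one.
function runTests(tests, finish) {
  var pending = tests.length;
  tests.forEach(function (test) {
    test(function done() {
      pending -= 1;
      if (pending === 0) finish(); // only the last test shuts things down
    });
  });
}
```

Each test gets a `done` callback to call at the end of its assertions, so the server stays up exactly as long as there are tests in flight.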

Anyway, just a quick blog post on how to write unit tests in NodeJS.

Writing my own JSON-RPC server in NodeJS

So summer is over (I know, like it matters for working people… but it does) and all of my friends who were interning in NYC are gone. Sounds pretty depressing, but the good thing about it is that now I’ve got some time to work on my side projects. 1 of those is my implementation of the JSON-RPC protocol in NodeJS; the source code can be found on GitHub. This blog post sorta serves to record what I found out while writing this module (it’s not finished yet!!!). Oh, the JSON-RPC spec can be found here.

So my goal in writing this module is to sorta create a framework for MetaDB, another project my friend and I have been working on. Thus, although I tried to keep it as generic as I can, there are some design patterns that utilize my server and allow it to (in the future) grab metadata from the service classes themselves and do some magic (whether it’s better error feedback or introspection or authentication, I’m still working on it). I also found quite a few other implementations out there, including 1 in Connect, but they don’t seem to do some of the stuff I want.

For example, I want my server to have the ability to automatically register all public methods in a module, which allows namespacing when I write my service handlers. The other thing is introspection, which is proving to be pretty challenging. Streaming and authentication are also on my list, and those are a bit tricky to implement without bloating up the API.
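As a sketch of the auto-registration idea (made-up names and conventions, not njrpc’s real API): walk a handler object and register every public-looking method under its namespace:

```javascript
// Register every public method of `handler` under 'namespace.method'.
// Assumed convention here: names starting with '_' are private.
function registerHandler(registry, namespace, handler) {
  Object.keys(handler).forEach(function (name) {
    if (typeof handler[name] === 'function' && name.charAt(0) !== '_') {
      registry[namespace + '.' + name] = handler[name];
    }
  });
}

var registry = {};
registerHandler(registry, 'Playlist', {
  get: function (name) { return 'songs in ' + name; },
  _scanDisk: function () {} // stays private
});
```

The dispatcher then only ever looks methods up in `registry` by their full `namespace.method` name, so handler modules never collide.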

Once this server is complete, ideally I’d fire up 2 instances, 1 on my VPS and 1 on my friend’s VPS, and grab a reverse-proxy/load-balancer to test run this thing, hopefully with distributed MongoDB/CouchDB as well.

So why JSON-RPC and not RESTful? I personally find JSON-RPC more convenient for programmers, as the design is to call a method remotely (Procedure Call, thus the name RPC). For me it’s more intuitive than sending GET/POST/PUT/DELETE. A JSON-RPC server also serves a single entry point instead of REST’s multiple URLs, which IMO makes the driver layer a little easier to manage. The other reason is that its message format is JSON and NodeJS is JavaScript, which got rid of a lot of serialize/deserialize/type-conversion code.

One of the challenging things about JSON-RPC and NodeJS (or at least challenging for me, coming from a heavy static-typing Java background) is that everything is a callback and everything is dynamically typed. In Java, getting a response body is a straightforward blocking read; here’s how I get the response body in NodeJS:

var resString = "";
response.on('data', function(chunk) {
   resString += chunk;
});
response.on('end', function() {
   // the full body has arrived; safe to parse it here
});

The dynamic typing part is actually very flexible when you’re writing a service, but it bites when it comes to generating documentation for introspection. Again, I’m spoiled since Java has annotations and static typing; when I first started using jQuery, for example, it was pretty hard to figure out what the format of a parameter should be. They’ve improved that a lot, but the point is it might require a lot of manual effort instead of an auto-generated doc.

Anyway, this project has helped me a lot in learning NodeJS and JS in general. Some of the stuff below might be useful if you’re doing this:





Yeah so my audio player in jQuery Mobile and NodeJS (part 2)

Ok so my prev post was about how to construct a playlist in jQuery Mobile with some simple NodeJS file serving. This one is about constructing the audio player itself! Mine looks like this (cause I was kinda too lazy to do the styling properly):

My audio player

Ok so it’s very basic: we got a toggle Play/Pause button, a Next button, a Prev button, a progress bar for download progress and a fake album art (cause I didn’t know how to extract mp3 metadata yet). The features I implemented are pretty basic:

1. Play/Pause

2. Next/Prev Song

3. Progress bar for song buffering

4. Time Left

5. Auto-play the next song when the current one ends

6. Header shows song name

The HTML structure itself is rather simple as jQuery Mobile does most of the styling for you:

<div data-role="page" class="player">

    <div data-role="header">
        <h1>My collection</h1>
    </div><!-- /header -->

    <div data-role="content">
        <div class='cover-art' style='text-align:center'>
            <audio src='music/kpop/ttl2.mp3' preload autoplay></audio>
            <img src='images/no-album-art.png' />
        </div>
    </div><!-- /content -->

    <div data-role='footer' style='text-align:center'>
        <p class='track-info'>
            <span class="song-progress">
                <input type="range" min="0" max="100" value="0" />
            </span>
            <span class="timeleft"></span>
        </p>
        <div class='playback' data-role="controlgroup" data-type='horizontal' style='text-align:center'>
            <button class='playback-prev' data-icon='back'>Prev</button>
            <button class='playback-play'>||</button>
            <button class='playback-next' data-icon="forward" data-iconpos="right">Next</button>
        </div>
    </div><!-- /footer -->

    <script type="text/javascript">
        $('div.player').bind('pageshow', function(ev, ui) {
            if (!$(this).attr('data-init')) {
                Player.init('div.player.ui-page-active', $.getUrlVar($(this).attr('data-url'), 'song'));
                $(this).attr('data-init', 'true');
            }
        });
    </script>
</div><!-- /page -->

So again, I bind some initialization to the “pageshow” event of the page and make sure it doesn’t get initialized twice. Since the href in each <li> points to the same page but with a different parameter, jQuery Mobile loads this page again every single time, even if it’s the same one. The check only prevents the forward history button from reloading the song. However, it does not prevent multiple songs playing at the same time, cause jQuery Mobile loads those as different divs. You can customize the changePage behavior when the user clicks on a <li>, but I didn’t, just to keep it simple.

The parameter is stored in the main player div (selector “div.player”; the class “ui-page-active” indicates it’s the active one), so $.getUrlVar just extracts the song parameter from it (which indicates the song index):

	getUrlVars : function(string) {
		var vars = [];
		var hash;
		var href = string ? string : window.location.href;
		if (href.indexOf('#') > -1) {
			var hrefArr = href.split('#');
			href = hrefArr[hrefArr.length - 1];
		}
		var hashes = href.slice(href.indexOf('?') + 1).split('&');
		for (var i = 0; i < hashes.length; i++) {
			hash = hashes[i].split('=');
			vars[hash[0]] = hash[1];
		}
		return vars;
	},

	getUrlVar : function(string, name) {
		return $.getUrlVars(string)[name];
	}

Pretty simple: just splitting the data-url field into a map of parameter names and values. The Player.init function takes in the parent div selector (so that I can locate the controls relative to the parent div) and the song index. I basically keep track of all the control DOM elements:

var $next = $(div + ' button.playback-next');
var $prev = $(div + ' button.playback-prev');
var $play = $(div + ' button.playback-play');
var $trackInfo = $(div + ' p.track-info');
var $songProgress = $trackInfo.find('.song-progress');
var $loading = $songProgress.find('.loading');
var $timeLeft = $trackInfo.find('.timeleft');
var $slider = $songProgress.find('.ui-slider');
var $handle = $slider.find('.ui-slider-handle');
var $title = $(div + ' h1.ui-title');
var $audio = $(div + ' audio');
var audio = $audio.get(0);

I have this habit of prefixing jQuery objects with $ to distinguish them from actual DOM elements ($audio is the jQuery-wrapped version of audio). Play/pause is really easy:

$play.click(function() {
    var $buttonText = $(this).parent().find('.ui-btn-text');
    if (audio.paused) {
        $audio.attr('data-state', 'play');;
        $buttonText.text('||');
    } else {
        $audio.attr('data-state', 'pause');
        audio.pause();
        $buttonText.text('>');
    }
});
Prev/Next is also straightforward:

$next.click(function() {
    var state = $audio.attr('data-state');
    var current = parseInt($audio.attr('data-current'), 10);
    Player.getSongPath(current + 1, $audio, $title, function() {
        $audio.attr('data-current', current + 1);
        if (state == 'play') {
  ;
        }
    });
});
$prev.click(function() {
    var state = $audio.attr('data-state');
    var current = parseInt($audio.attr('data-current'), 10);
    Player.getSongPath(current - 1, $audio, $title, function() {
        $audio.attr('data-current', current - 1);
        if (state == 'play') {
  ;
        }
    });
});

So we’ve done 1 and 2. Let’s jump to 5 cause it’s also easy:

$audio.bind('ended', function(ev) {
    $; // auto-play the next song in the list
});

I did 6 as a separate function that pings the server for the song’s path, then changes the audio source and the title:

getSongPath: function(index, $audio, $title, fn) {
    $.post('playlist?song=' + index, null, function(data) {
        $audio.attr('src', data.result);
        var filenameArr = data.result.split('/');
        var filename = filenameArr[filenameArr.length - 1];
        $title.text(filename); // the header shows the song name
        if ($.isFunction(fn)) {
            fn();
        }
    }, 'json');
}

For some reason I put null in the POST request data instead of the actual data (song=2), cause I wasn’t getting that data on the server side (I tried req.body, req.query and everything… it didn’t seem to show up; I’ll look into it a bit more). OK, now let’s get back to 3:

if (!$loading.get(0)) { // inject the white loading bar before the handle
    $handle.before('<div class="ui-slider loading" style="width: 3%; float: left; top: 0; left: -3%; background-color: buttonface;"></div>');
    $loading = $slider.find('div.loading'); // update var
}
$audio.bind('progress', function() {
    var loaded = parseInt(((audio.buffered.end(0) / audio.duration) * 100) + 3, 10);
    $loading.css({
        width: loaded + '%' // widen the bar as more of the song buffers
    });
});
var manualSeek = false;
var loaded = false;
$handle.css({
    top: '-50%' // somehow the styling of footer and handle conflicted and messed it up, so I had to bump it up 50%
});

I actually didn’t know how to get the current time of the audio, but after googling around and looking at the audio element’s attributes, things got a bit clearer. Here’s 4:

$audio.bind('timeupdate', function() {
    var rem = parseInt(audio.duration - audio.currentTime, 10),
        pos = Math.floor((audio.currentTime / audio.duration) * 100),
        mins = Math.floor(rem / 60),
        secs = rem - mins * 60;
    $timeLeft.text('-' + mins + ':' + (secs > 9 ? secs : '0' + secs));
    if (!manualSeek) {
        $handle.css({
            left: pos + '%' // move the handle along as the song plays
        });
    }
    if (!loaded) {
        loaded = true;
    }
});
Ok so that’s how I made a sorta functional audio player. There are still problems with it, but hopefully this DIY guide gave you some idea of how to control the audio element manually.


What I’ve been up to… (a.k.a making an audio player using jQuery Mobile & NodeJS) Part 1

So I recently signed up for a VPS from AlienVPS at a ridiculously low price and guess what, it crashed on me twice today… -_- But $19/month is still pretty darn cheap. At least it gave me some sandbox to play around with NodeJS and jQuery Mobile.

OK so far NodeJS has been rather simple and straightforward. I actually use the Express framework on top of NodeJS, which eases the work a little bit. However, I can really see how this can get complicated really, really fast. 1st of all, I sorta have to implement all the HTTP protocol code manually in NodeJS (except for 500 Internal Error and 200 OK, I think). So that includes 404, 403, blah blah. Not that it matters that much, except I wanna maximize my site traffic by taking advantage of search engine bots. Well, I disallow everything in robots.txt right now, so if those bots behave, I should be good. You can check it out at but please please don’t spread it around or I’m gonna have to shut it down due to my limited bandwidth. The app is still buggy since it’s a work in progress, but a refresh should make it behave a bit better.

Anyway, the 1st thing a web server should be able to do is serve static pages, and that can be achieved pretty easily:

var app = require('express').createServer();
var fs = require('fs');
var public_path = 'public/';
var PORT = 8080;
app.get('/', function(req, res) {
        res.sendfile(public_path + 'index.html');
});
app.get('/*', function(req, res) {
        var page = req.params[0];
        res.sendfile(public_path + page);
});
app.listen(PORT);

Easy enough… Now I want to query some specific stuff like, idk, my KOREAN POP playlist!!

var songs;
var MUSIC_PATH = 'music/'; // songs live under public/music/'/playlist.html', function(req, res){
        Controller.handlePlaylist(req, res);
});
var Playlist = {
        get : function(name) {
                return fs.readdirSync(public_path + MUSIC_PATH + name);
        }
};
var Controller = {
        handlePlaylist : function(req, res) {
                if (!songs) { //lazy-initialize this
                        songs = Playlist.get(req.param('playlist'));
                }
                var index = req.param('song');
                if (index && index >= 0 && index < songs.length) { //if I query a specific song number, give me the path to the song
                        res.send({ 'result' : MUSIC_PATH + 'kpop/' + songs[index] });
                } else { //otherwise give me the whole list
                        res.send({ 'filenames' : songs });
                }
        }
};

Now when I hit playlist.html?playlist=kpop with a POST, I get my playlist, and playlist.html?song=1 with a POST gives me the 2nd song. This is a simple enough song-serving mechanism that will help me build my audio player.

Playlist Page

Since I’m not using any view rendering engine, in playlist.html I actually have to use the trick of loading the file 1st, then making an ajax call to populate the data. This gets very tricky with jQuery Mobile since it doesn’t have a lot of events around when it’s done rendering and whatnot. Combined with the ambiguous timing of AJAX callbacks, this can lead to a pretty disruptive UX (I’m still having trouble synchronizing stuff in JavaScript). But anyway, playlist.html has a pretty simple structure:

<div data-role="page" class="playlist">
    <div data-role="header">
        <h1>My collection</h1>
    </div><!-- /header -->

    <div data-role="content">
        <ul data-role="listview" data-inset="true">
                <li data-role="list-divider">Kpop</li>
        </ul>
    </div><!-- /content -->

    <script type='text/javascript'>
    $('div.playlist').bind('pageshow', function() {
        var $page = $(this); // to use inside callbacks since "this" is different there
        if (!($page.attr('data-init'))) { // Initialize once
                $.post('playlist.html?playlist=kpop', null, function(data) { //retrieve the data
                        var i;
                        var filenames = data['filenames'];
                        var $playlist = $page.find('ul[data-role="listview"]');
                        for (i in filenames) { //populate the list of songs
                                $playlist.append('<li><a href="player.html?song=' + i + '">' + filenames[i] + '</a></li>');
                        }
                        $page.attr('data-init', 'true');
                        $playlist.listview('refresh'); //refresh the view
                }, 'json');
        }
    });
    </script>
</div><!-- /page -->

The HTML structure itself is simple and the JavaScript is kind of a hack. The “pageshow” event in jQuery Mobile gets invoked after the page has been initialized (a.k.a after jQuery converts basic elements into their themed mobile looks). Why not “pagecreate” or “pagebeforecreate”? Because the callback is actually an AJAX call to grab the data, and the listview can only be refreshed after it’s been initialized (which isn’t guaranteed in the previous 2 event hooks). If I were to use a view rendering engine to populate the data, then send it across the wire, I wouldn’t have had this problem, so… something to look at next time.

OK so that’s the easy part. I’ll write next time about how to actually make the player, cause that took me like 3 days… >.< sleep now!


Why I’m hooked on JavaScript!!

Now I’m apparently not even an expert at JavaScript, but I’ve had some exposure to it in 1 way or another, mainly through using jQuery in my last project (which I totally dig!!). I’m still working on web development in general and that project in particular, and my friend and I have been going back and forth between popular languages/frameworks like Ruby on Rails, Django and NodeJS. Honestly, I’m kinda a fanboy for NodeJS… I really am. Here’s why:

Now JavaScript has mainly been used client side, in the web browser. The interesting thing is that each big vendor (Mozilla/Google/Microsoft) develops its own JS engine. The Google V8 engine (which NodeJS is built on) seems to be the fastest one right now. I was pretty amazed when JS was used as a server-side technology, but if you break it down, I wouldn’t say it’s impossible.

IMHO, the important thing about writing a server is to handle sockets and a few protocols (HTTP, TCP… stuff like that). Protocols are (again, IMO) rules for how information is positioned in a packet, so you can implement them in pretty much any language. Traditional server technologies like Apache Tomcat or Glassfish spawn threads to handle each incoming request. That’s not necessarily a bad thing, but the bottleneck usually comes down to IO, so there are drawbacks in blocked threads when you run a long query and such. You can overcome that by building a distributed system with a load balancer on top, along with some request filtering and routing, at which point it gets kinda costly.



NodeJS uses an event queue: everything goes in there, an event loop pulls it out, and callback functions handle the results. So technically there’s no blocking and the app is, theoretically, very scalable. A long query gets run, and once it’s done the callback is invoked. Now that sounds kinda cool.
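A tiny illustration of the non-blocking part: the synchronous code always finishes before a queued callback runs, even with a 0ms timeout:

```javascript
// The setTimeout callback is queued on the event loop, so the synchronous
// code below it always runs first.
var order = [];

order.push('start');

setTimeout(function () {
  order.push('callback'); // runs only after the current stack unwinds
}, 0);

order.push('end'); // always runs before the callback above
```

Right after this snippet runs, `order` is `['start', 'end']`; the callback only gets its turn once the call stack is empty.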

OK so it can handle HTTP requests, what else? Another important thing, I would say, is the database, which kinda comes down to separate vendors. How does the server connect to the database? Using a driver! Java gets an advantage cause it maintains an interface for vendors to write drivers against. RoR supports a couple of database technologies, and so does Django. There are enough database vendors and plugins out there that it’s gonna take a while for NodeJS to catch up on that front, I assume.

What else? Some process manipulation like file IO and such, cause the server will, at some point, need to modify file resources, invoke processes and stuff like that. NodeJS does have this, although I’m not sure how stable/mature it is. But that should take care of it.

Now I’ve always thought that RoR and Django were booming due to their ability to do rapid prototyping/development really, really fast. You can get started on a webapp at a reasonably good pace with those, thanks to ORM and dependency injection and all the fun stuff. Honestly, I’m kinda a purist, to the degree that I don’t hate abstracting 1 language over another, but magic can be disastrous sometimes. I believe I should hand-write SQL queries and HTML/CSS instead of using an ORM and compiled HTML (such as erb/jsp/asp…), cause you get better control over it. It gets hard to manage after a while, but it’s easier to identify a bottleneck and optimize it directly, instead of optimizing RoR and hoping that would optimize the SQL.

Besides, using compiled HTML puts more load on the server than the client, which in a way produces a more stable outcome since client machines/browsers vary a lot. But again, back to the mixed-language kinda thing: gotta be careful.

1 of the things I found challenging about JavaScript is actually synchronization. Now to my knowledge, browser JS is executed in a single thread, so how come synchronization is a big deal? Because browsers render different things in different threads (you’ve been wondering how pages load part by part, right??). What that means is browsers handle images/HTML structure/CSS styling/JS separately, so it was such a pain for me to get a consistent user experience. I honestly haven’t dug around enough to solve this issue, since that’s just the nature of callbacks. Timeouts and synchronization variables don’t seem to suffice…

Wait, so what about setTimeout and setInterval and stuff, aren’t they supposed to give better control? Well FYI, a setTimeout of 1000ms doesn’t mean the callback gets executed after exactly 1000ms. It means it’ll get executed after AT LEAST 1000ms (cause it’s a queue…), so yeah, still digging around.

Back to my project, which is a CMS that now might turn into an RDS: I honestly think all we need are the important features listed above. Worst case, we write a thin wrapper layer around the tools we want to use, then invoke shell processes for those. Anyway, it comes down to a light, thin server framework, cause we probably don’t need all the big guns (which can take tons of time to tune). So yeah, it’d be great to set foot on that!!!



The Social Network won 4 Golden Globe awards!! Duh!

Ok so I’m definitely a BIG fan of this movie called “The Social Network”, a.k.a the Facebook movie, so I’m pretty happy that it won 4 awards at the Golden Globes. I would say the movie dramatized a lot of Facebook history, but what movie doesn’t. The thing I like the most about it can be summed up as “been there done that” (well, for like the 1st 5 minutes of the movie; apparently I haven’t become mad rich yet).

The Social Network

So I’ll spoil a lil bit here: the movie is about Mark Zuckerberg, founder of Facebook, and how it became popular. It started out with him getting dumped at the bar, then running back to his dorm and doing his magic hacking (which I’ll explain in later posts) to create the site (along with some girlfriend revenge blog posts). The site attracted major traffic, which brought him before Harvard’s Board of IT (or something like that). After that he was approached by the Harvard twins and their friend to discuss the idea of Facebook. They hoped he would build it for them, and that it was gonna be big, which it did, just that it’s not theirs anymore.

I found the movie uber inspirational because dang, I was that kid. Actually any CS-major student (or at least the ones I know) was that kid during college. In case u guys don’t know about the software industry, I feel like it’s one of those that doesn’t require a huge amount of capital to start with, unlike finance, manufacturing, engineering or pharmaceuticals. All u need is pretty much a $1000 computer and probably a $5/month hosting service. In fact a lot of big software companies start with open-source (a.k.a free) tools. Once they get the hype, they offer premium services that start generating profits and such.

When I was in college doing open-source projects and research, I always hoped to make it big. I was often told that what I made was gonna be used by a lot of people, so design it that way: design the software for maintainability and extensibility. What that meant was to make something that, when u leave, someone can take over easily (maintainability) and that can be extended easily (extensibility). But such hopes were pretty delusional, which led me to choices I regretted later on.

One of the things I learnt while working is to quantify requirements. When I was doing work for my professors, I never asked: how many (in numbers) people do u think are gonna use this? How fast (in seconds) should it run? Who (as in names/backgrounds) will take over my product? Why not? Cause I was stupid and intimidated. I still am. But when it comes to product assessments, numbers rule.

But anw, after the movie, I felt like my friends and I were doing exactly what Mark was doing. Did we stay up till 4a.m coding while intoxicated? We sure did. Did we crash the CS server and wake the Dean of IT up at 4a.m? We also did that (just that we woke the CS Department Head up cause u know, he ain’t the Dean. We didn’t know the Dean’s number anw). The thing is, Mark made it cause he took the risk. We didn’t (not to say that if we did we’d make it, but u got the point). Thus, the movie made me feel like that was my college life, in another direction…

I never took the risk cause I’m Asian (not to be racist!) and I was raised with the mentality of never taking risks. My parents rarely encouraged me to “go for it”. Instead, they always told me “what if u fail?”. Therefore, my whole life has revolved around making a plan B and making sure plan B works even when I have to abandon plan A. With that said, I’m gonna finance a Mini Cooper!

PC still can’t play Xbox games… WHYYY??

So it’s been a LOOOOONG time since I posted and honestly, I miss blogging. It’s pretty much the one motivation for me to keep up with technology (to brag about it on my blog, psh duh!! jk :P) while working at a company using technologies from the 70s.

Ok back to the main topic. The reasons were pretty obvious at least last year, when the Core i5 hadn’t come out and the GTX 4 series didn’t exist. PCs weren’t strong enough to handle all those eye-candies the Xbox 360 has to offer, duh! But now that the technologies seem to be on par, I’m still wondering why this hasn’t happened.

GeForce GTX 460

If you don’t know the general specs of the Xbox 360, it’s actually not that fancy: the CPU is a custom IBM core at 3.2GHz with 3 cores on a single die. Each core can handle 2 threads, so that makes pretty much 6 threads running concurrently. The GPU used to be pretty top notch, but not now, as it’s merely a 500MHz ATI graphics card with only 10MB of embedded RAM. The system itself only has 512MB of RAM.

Xbox 360

Except for the CPU, this spec is absolutely nothing compared to any decent gaming PC out there. But there’re a few things the Xbox is better at, actually MUCH better at.

  1. The custom ATI graphics card uses a Unified Shader Architecture, which handles both Pixel Shaders and Vertex Shaders in the same pipeline. Now the card itself has a 256-bit memory bus, which is pretty much the same as a mid-to-high end GeForce graphics card nowadays, yet handling 2 in 1 doubles the efficiency, which is pretty badass!! (I say “pretty” too much)
  2. Although the Core i5 right now uses 45nm technology with a whole bunch of other stuff, it still technically can’t beat the Xbox 360 core, mainly due to the dedication of the architecture. The Xbox core is clearly 100% for gaming, and each thread is designed to handle sound, the physics engine, collision… depending on the game designer him/herself. The Intel Core i5 is strictly for multitasking and handling general instruction sets from the OS. Implementing Intel HD Graphics in there doesn’t really help at all!!
  3. Although the ATI has only 10MB of embedded RAM, its bandwidth is NASSSSTY!!! The same 256-bit bus in a normal GPU is actually hardly optimized to its fullest potential, which comes back to point 2 (different design intentions)
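For reference, bus width alone isn’t bandwidth. A rough back-of-the-envelope formula (the numbers below are purely illustrative, not exact specs for either card):

```javascript
// Theoretical memory bandwidth in GB/s:
// (bus width in bits / 8 bits per byte) * effective memory clock in GHz.
function bandwidthGBps(busBits, effectiveClockGHz) {
  return (busBits / 8) * effectiveClockGHz;
}

// e.g. a 256-bit bus with an effective 1.4GHz memory clock:
console.log(bandwidthGBps(256, 1.4) + ' GB/s'); // "44.8 GB/s"
```

So two cards with the same 256-bit bus can still differ a lot depending on memory clock and how well the pipeline actually keeps the bus busy.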

Anyway this has been bugging me for a while so I gotta find out. BTW I’m using M$ Writer for the 1st time so we’ll see how that turns out. Aight have fun and keep on rolling guys!!!


iPhone 4 issues are myths!!!

Alright, they’re not exactly myths, but not THAT many people experienced those problems. I guess the news just always tries to criticize and sabotage pretty much any new consumer product that comes out, the iPhone being the worst of them. But hey, I got myself an iPhone 4 and I didn’t get any death grip, yellow spot or anything like that. The experience is PHENOMENAL (I spelled it right, right?)

Anyway, the iPhone 4 is mad fast, probably because I was using the iPhone 1st generation, which apparently could play Chaos Rings at around 8fps. The iPhone 4 is just smooth with almost no lag. Facebook and Yahoo Messenger did crash on me once, though. The annoying thing about the iPhone 4 is that now it actually doesn’t close any app. Pressing the Home button puts the app to sleep, I guess, as I still saw it in the multitasking interface. I’m a performance freak myself, so I’m actually pretty concerned about what this means for the battery life of the product.

The retina display is off the hook, along with the new camera. I took this picture with the phone’s camera and it turned out surprisingly good!!! Can’t wait for the FaceTime tryout thing with my friends 🙂

Coney Island through iPhone 4

The battery life itself is pretty decent actually. I walked around SoHo using GPS and the compass and it lasted me almost the full day. I did drop by the Apple Store SoHo to sneakily charge the thing a bit, but overall I’m pretty satisfied with the phone. I haven’t actually tried out the whole gyroscope thing (gyro = delicious lol, just FYI) cause I didn’t really know any app that utilized the feature 😛 Will definitely do in the future.

Alright, now moving on to the life story part: gosh I miss Hanoi soooooo much. Good things do come to an end and I guess I’m just not ready for it yet. I think I went through the same feeling when I left high school, cause Hanoi is so small, everybody lives within “walking” distance. I talked to a couple of my friends and there’re already gaps between us since I’m not physically there anymore. Anyway, once I move to Madison and establish my new life, hopefully this will get much better, as I’m definitely pretty “sociable” and “co-dependent” I guess… gotta turn that radio up now!!! (I put the radio on just to get background noise)

Kk guys I think I forgot to say have fun and keep on rolling last time but here it is!!! Kabooom baby!! Dang I sound kinda depressed.
