
Where’s the beef, Blogger.com?

By on Jul 13, 2011 in Tech Takes | 13 comments

There’s one good reason commenting on a blog hosted on Blogger.com makes me mad: despite being linked to a Google account, it’s the one Google service that doesn’t automatically log you in. With blogs, this means once you’ve entered a comment, you’re redirected to a login page and then redirected back to the comment page. Wait, I didn’t get the option to subscribe to follow-up comments via email! Now that I’ve had to re-login, I see the option for that. So… leave another comment just to enable email updates? And the comment field cannot be empty? I’ll probably have to delete the second comment because it will probably be worthless? Fuck you very much, Blogger. FUCK YOU!

I abandoned Blogger.com’s blog hosting service long ago to get full creative control over my blog. Blogger’s service is not bad per se as much as it is utterly neglected. For the purpose of making press releases and pretending to still be the cool kids on the blog block, they put together a half-decent effort in Blogger In Draft. What isn’t so cool is that Google takes months or years to push these improvements out of perpetual beta testing. The majority of Blogger users don’t know, don’t care, or can’t be bothered with the hassle of shifting to another blogging platform even when they feel features are lacking. And even though the features have been developed as ‘Blogger In Draft’, they simply aren’t pushed out for greater public use!

Don’t get me started on the schizophrenic look Blogger blogs are forced to adopt because of the platform. Why should I be taken to a separate site altogether to post comments (as it sometimes does)? I know that’s an option offered to the user, but why the fuck would you even offer it if you care about a pleasant user experience? Why does clicking on an ‘About Me’ link take me to another, differently-styled site altogether? Honestly, WTF is the point of ‘Google Friend Connect’? Yay, I clicked a button. Now what? Where’s the beef?
Blogger.com templates are the metaphorical equivalent of Ronald Weasley wearing dress robes from the last century. Giving users the choice to change colours and set background images sends everyone back to the Geocities era. This is not ‘customisation’! I am fed up of the same old two-column, here’s-a-set-of-links-in-the-sidebar design whenever I visit a Blogger.com-hosted blog. The problem isn’t that it’s ‘plain’; it’s that the platform itself limits what you can do. I have never come across a single Blogger.com blog whose design has delighted me while reading text on it. Add to that the verbal diarrhoea – and the users are to blame for this – of chat boxes, buttons, ‘visitor maps’, virtual pets, ‘award showcases’ and whatnot. (Even worse off are those who still don’t know / care about shifting away from LiveJournal, who have to deal with a seriously antiquated system. On second thought, perhaps I don’t want all those spinning GIFs of Edward Cullen or whatever-the-fuck-the-name-of-Wolf-Boy-is to run over WordPress.com.)

Then there are bugs, such as the one I mentioned, that have been around for ages, yet there’s no way to file bug reports with Google. Sure, they probably have a ‘contact us’ form somewhere that is only revealed after browsing through ten pages of Blogger help and many patronising Are you sure you haven’t seen this help FAQ… screens. Going by my experience with Google customer support, they probably print out all those contact form submissions from their shiny Chromebooks using Google Cloud Print and use them to wipe their ass. Who gives a shit about user experience? If Google really is dog-fooding the Blogger service (as it often brags it does with other projects it’s developing), then how can they not have noticed the things that are frustrating about the user experience for years now? How can they not see – even if they continue to have millions of users – that their community is...

Staying connected in Singapore: A guide to phone companies, mobile data usage, and international calling

By on Jun 19, 2011 in Reviews, Tech Takes, Travel | 17 comments

My last couple of blog posts have been angsty, because when you are twenty you Need To Rebel and Stick It To The Man for It Is The Cool Thing To Do and Eff Them… (See what I meant by being cynical of my own cynicism? I’m not making this up! I am genuinely that conflicted internally about what I feel about my own beliefs.) Deep breaths, Banerjee, deep breaths. Calm down. Reach your Zen state. And tidy up your fucking desk.

So, for a break from Sticking It To The Man, I decided to help out The Man instead by writing a guide on cellphone companies and Whatever Else The Title Promised You. This is the first in a series of informational posts that I intend to write, which I hope will be useful for transitory residents of the island nation of Singapore – tourists, exchange students, foreign students, expats, illegal immigrants, and pirates. I don’t promise anything interesting for my regular readers – except for a shocking statistic in the section on mobile data prices and a lone joke about a web telephony service that leverages a racist Spanish stereotype.

***

The Basics

Singapore’s telecom sector is an oligopoly with three operators: SingTel (government-backed, 46% market share), StarHub, and M1. All three operate GSM-based networks with support for 3G handsets. The only serious implication of this for most visitors to Singapore is that if your current cellphone operates on a CDMA network – as is the case with a few (albeit large) American networks – you will be unable to use it in Singapore. Most modern GSM handsets come with dual-band / tri-band / quad-band support, so they should work in Singapore. Asia, Europe, the Middle East, and Africa generally use the same frequency bands; the odd one out is America again, so if you’re visiting from the US you need to double-check whether your handset will work.

Protip: In Singapore, the term ‘handphone’ is most commonly used.
People will understand, though, if you use equivalent terms like cellphone or mobile phone; it’s usually visitors who get confused when ‘handphone’ is used.

The easiest way to get connected is to get a prepaid (or ‘pay-as-you-go’, if you prefer) SIM card. You can buy one from any operator-run outlet or convenience store (7-Eleven, Cheers, Fairprice…) as long as you have your passport with you. The details page of your passport will be photocopied / scanned for registration purposes. The only advantage of buying at an operator’s own outlet is that you can choose the phone number that you get – and if you have a fetish for specific numbers then you might just turn out to be a lucky bastard. There is no waiting period for SIM card activation. (India, as always, has insanely strict rules for issuing prepaid SIM cards – forms need to be filled in, a passport photo and proof of residence are required, and there’s a waiting period of 2-3 days. Think about how hard it must be for tourists! I’d be extremely annoyed if I came across equally strict laws in any of the countries I’ve travelled to.)

A new SIM usually costs S$15-20, with S$5-10 of calling balance included. In such a competitive market, there isn’t much price differentiation among the three operators for basic services such as voice and text, so it doesn’t make much difference which operator you choose if all you want are the basics. Typical local call rates range from 8-22 cents / minute for voice calls (depending on time of day) and 5 cents / text (local) or 15 cents / text (international), so calling / texting is fairly cheap for light usage. Take note, however, that in Singapore you are charged for incoming voice calls too, at the outgoing local voice call rate; this comes as a shock to visitors from countries where that isn’t standard practice.
If you expect to receive a lot of incoming calls, you can get the incoming call charge waived by paying a daily charge of 60 cents instead; the procedure for this differs from operator to operator but should be included in the start guide that comes with your SIM. Another thing you should be prepared for is that customer care hotlines are not operated 24/7, and often there are call charges (albeit at a reduced price) for speaking to customer care.

Recharge vouchers can be bought at any convenience store or operator outlet. You also have the option of paying for a recharge online via credit card. Although in theory you can buy low-value ‘top-ups’ of S$5 too, I have rarely found these on sale; top-up vouchers of denominations S$10 and above are widely available. If you are a heavy user, watch out for promotional top-ups: all three operators have specific recharge denominations, say S$30, for which they give ‘S$130 value’. The way this works is that the top-up denomination – S$30 in this example – is added to your ‘main’ calling balance, which is deducted when you make international calls or access data; an additional S$100 is added as ‘special’ calling balance, which is deducted for all incoming voice calls and all local outgoing calls. The catch is that the ‘special’ calling balance is time-limited – usually 30 days – and then expires, whereas your ‘main’ calling balance is never affected by time restrictions. Yes, it is as terribly complicated as it sounds. You need to...
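If the dual-balance arithmetic above sounds convoluted, here’s a minimal sketch in Python modelling it. To be clear, this is my simplified reconstruction of the scheme as described in this post (using the S$30 → ‘S$130 value’ example); the class and method names are made up, and no operator’s actual billing logic looks like this:

```python
from datetime import date, timedelta

class PrepaidAccount:
    """Toy model of the dual-balance prepaid scheme described above.

    'main' balance: never expires; pays for international calls and data.
    'special' balance: granted by promotional top-ups; pays for local calls
    (incoming and outgoing); expires after a fixed validity window.
    """

    def __init__(self):
        self.main = 0.0             # S$, never expires
        self.special = 0.0          # S$, time-limited
        self.special_expiry = None  # date after which 'special' is void

    def promo_topup(self, amount, bonus, today, validity_days=30):
        # e.g. amount=30, bonus=100 -> the advertised 'S$130 value'
        self.main += amount
        self.special += bonus
        self.special_expiry = today + timedelta(days=validity_days)

    def _expire(self, today):
        if self.special_expiry and today > self.special_expiry:
            self.special = 0.0

    def charge_local_call(self, cost, today):
        # Local calls (incoming or outgoing) draw on 'special' first.
        self._expire(today)
        from_special = min(cost, self.special)
        self.special -= from_special
        self.main -= cost - from_special

    def charge_data_or_idd(self, cost, today):
        # Data and international calls draw only on 'main'.
        self._expire(today)
        self.main -= cost

acct = PrepaidAccount()
acct.promo_topup(30, 100, date(2011, 6, 1))
acct.charge_local_call(5.0, date(2011, 6, 10))    # comes out of 'special'
acct.charge_data_or_idd(10.0, date(2011, 6, 10))  # comes out of 'main'
print(acct.main, acct.special)                    # 20.0 95.0
acct.charge_local_call(5.0, date(2011, 7, 15))    # 'special' has expired
print(acct.main, acct.special)                    # 15.0 0.0
```

The point the sketch makes: a heavy local caller gets great value from the S$100 bonus, but only if they burn through it within the 30-day window.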

reCAPTCHA, spam, and (Vanilla) Forums

By on Apr 25, 2011 in Tech Takes | 6 comments

I discovered what I consider to be a fairly serious issue with the reCAPTCHA authentication system today, and wanted to share this. I’m fairly sure not many know these facts, which can affect a lot of forum owners / administrators.

I run a forum using Vanilla Forum at gyaan.in – regular readers of this blog would know about it. A couple of months ago, I upgraded the forum to the new, redesigned Vanilla Forum 2.x version that comes with built-in support for registration verification using reCAPTCHA. Until the 1.x branch, out of the box there was no way to pre-approve registrations; a moderator had to approve each account manually. (This is what gyaan.in used too.) With a function as crucial as user registration, I didn’t want to make modifications only to have to re-modify and test them every time I applied an upgrade patch. So when version 2.x came along with baked-in support for reCAPTCHA, I was happy to jump on board and remove the approval process. (A move that, I must admit, was controversial within the gyaan.in community and among the moderators.)

Over the past few weeks, I noticed that gyaan.in’s email inbox was filling up with a considerable number of mail delivery failure notifications for the initial email sent right after successful registration. I didn’t give much thought to it as I (incorrectly) believed the first step in the new Vanilla Forum sign-up process was a verification email. It turns out that it is not – the system sends an email only once the user has been authenticated. Had I known this, the number of mailer-daemon messages would have set alarm bells off already. Today, one of the members (Shreyans) casually mentioned in a private message to me (in which he was discussing other technical issues he was facing with the forum) that there seemed to be a lot of users on the board with ‘nude’ or ‘naked’ in their usernames.
To my surprise, I discovered that was indeed the case – and in many instances these user accounts shared the same email address too. These were obviously spammer accounts, so I deleted them immediately. But that got me thinking about how they could have gotten through.

reCAPTCHA (now owned by Google) draws its CAPTCHA challenges from a corpus of OCR-recognised words from Google’s text digitisation efforts. You might have seen this verification challenge on Facebook too at some point. Two words are shown and you are told to enter both correctly to pass. Behind the scenes, reCAPTCHA doesn’t know what both the words are. One of the words has been positively identified by OCR and is kept as a ‘control’ word. The second word has not been recognised by OCR; user input for that word is taken and stored in a database. Once enough users identify an ‘unknown’ word as the same word, the reCAPTCHA system sends the corrected word back to the text digitisation programmes and adds it to the corpus of control words used in the system. A well-known loophole is that it is possible to enter one word incorrectly and have reCAPTCHA consider the answer valid.

What I couldn’t understand is how spambots could get past the control word. So I started playing around with the text I entered as the reCAPTCHA response in Vanilla Forum’s registration page. I found that if:

- the number of characters entered for each word is correct; and
- the words are entered as correctly as possible, except for one character (i.e., one character out of an entered word is deliberately incorrect)

…then reCAPTCHA would authenticate the entry as correct! This issue is not isolated to the Vanilla Forum implementation of reCAPTCHA either, as you can achieve similar results using the demo form on the official reCAPTCHA website.
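To make the finding concrete, here is a toy model in Python of the acceptance rule as I observed it. This is my black-box reconstruction, not reCAPTCHA’s actual code: an answer passes if the control word has the right length and differs from the expected word in at most one character position, while the ‘unknown’ word is merely recorded:

```python
def word_close_enough(expected, given, tolerance=1):
    """True if 'given' has the same length as 'expected' and differs
    in at most 'tolerance' character positions (Hamming distance)."""
    if len(given) != len(expected):
        return False
    mismatches = sum(1 for a, b in zip(expected, given) if a != b)
    return mismatches <= tolerance

def captcha_accepts(control_word, given_control, given_unknown):
    # Only the control word is actually checked; the answer for the
    # 'unknown' word is just stored for the digitisation corpus.
    return word_close_enough(control_word, given_control)

# One deliberately wrong character still passes:
print(captcha_accepts("morning", "morninq", "whatever"))  # True
# Two wrong characters, or a wrong length, fails:
print(captcha_accepts("morning", "m0rninq", "whatever"))  # False
print(captcha_accepts("morning", "mornin", "whatever"))   # False
```

Under this model a spambot doesn’t need a perfect OCR of the control word – a near-miss per word is enough, which matches what I saw on the registration form.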
I searched around for possible reasons for this and found this entry in the reCAPTCHA wiki: “On the verification word, reCAPTCHA intentionally allows an “off by one” error depending on how much we trust the user giving the solution. This increases the user experience without impacting security. reCAPTCHA engineers monitor this functionality for abuse.” It seems this is a problem by design.

What seems crucial here is the implication that the off-by-one error is allowed “depending on how much we trust the user giving the solution”. How exactly is this trust defined? I don’t think IP address blocking can be used (can it?), because the request for verifying inputs is sent by the server using reCAPTCHA, tied to the site’s specific public-private key pair. Which means ‘block IP addresses that send large volumes of incorrect inputs’ cannot be used to define this ‘trust’, as the IP address would be that of the server rather than the spambot / client. Another possible yardstick for measuring ‘trust’ would be allowing off-by-one errors only for typographically similar characters: ‘i’ / ‘l’, ‘a’ / ‘d’, ‘r’ / ‘n’, etc. However, I don’t think their system uses this either, as in all my attempts it accepted off-by-one errors for entirely different-looking characters, such as ‘s’ / ‘w’, ‘q’ / ‘f’, etc.

reCAPTCHA is undoubtedly the most popular CAPTCHA implementation used on the Web these days, which is what makes this such a serious issue. A lot of forums and sites now use it de facto because it’s a small way to pitch in to the noble ideal of text digitisation, and also because presenting ‘real’ words appears to...

OOops: Why OpenOffice.org* Isn’t Nearly Good Enough

By on Mar 5, 2011 in Tech Takes | 6 comments

* Et tu, LibreOffice?

I have lived through a fair number of Years of the Linux Desktop, and consequently had the pleasure of using (read: tolerating) OpenOffice.org for years. In the light of recent news of the German Foreign Office migrating back to Windows XP from Linux, I thought of writing about one aspect of this ‘interoperability’ that the German FO is crying about.

I never felt OpenOffice.org to be lacking in any respect compared to Microsoft Office 2003 when I was in high school (back in the day, Office 2007 wasn’t out yet). OOo satisfied my requirements without costing me a dime. In hindsight, that was only because on most occasions I was printing out hard copies of documents for submission, and didn’t often have to bother about whether a document that displayed as I wanted on my PC would display the same way on another. Now, in university, interoperability is a major headache for me. And yes, by ‘interoperability’ I mean the very narrow definition of ‘whether document files work with Microsoft Office’. When working on project reports and such, I often have to work on documents that have images, charts, tables, formulas, et al. – fairly ‘complex’ formatting in terms of how elements are placed on the page. More often than I’d like, I find the formatting screwed up. Even for something as simple as taking a document to the library for printing, it has become second nature for me to export my ODF documents to PDF so that I have ‘assured’ formatting on the library’s (Windows) machines.

Let me illustrate my points with a few examples. I am the studio director at the student TV station at my (exchange) university, and every week I need to edit studio scripts sent to me by the production assistant, edited on her Office 2010 install. This is how it looks when opened in Microsoft Office 2010 Starter Edition on my netbook. When I open the same file in OpenOffice.org 3.3.0, only TWO pages of that 15-page document are displayed. I am not kidding.
It actually shows up as ‘Page 1 / 2’. I can understand that this is an Office 2007/2010 .docx format document and support for this “isn’t as good yet”, but jeez, showing 2 pages out of 15 is a major screw-up, isn’t it! Let’s see how OOo renders the pages it can read. On the surface, it seems that OOo got most of the page displayed correctly, but a closer look reveals page elements that are completely missing. The ‘SOVT’ icon is gone, so is the box with ‘S/I’, as well as adornments such as the boxes around ‘ZOOM 5sec’. If I had to rely solely on OOo for my document needs, I wouldn’t have known these elements were missing. For me, this is a major problem, as those are crucial directives that I need when directing the studio shoot.

Even if I concede that this problem may be because OpenXML file support isn’t as ‘complete’ yet in OOo, that doesn’t explain why it utterly messes up page display with older .doc files, like the example below. Content from two consecutive pages was stacked on top of each other on one page along with previous edits – even though ‘show changes’ was turned off – rendering this file useless. Note that this file was created in Office 2007 and exported as .doc.

Okay, so maybe the whole problem is Office 2007/2010 creating corrupted files which it itself can figure out, but other software can’t. Maybe the files I was trying to open were ‘too complex’ (debatable). How about a simple document – a table with two columns with text in rows – created in a word processor not developed by Microsoft – say, Mac OS X’s bundled TextEdit – and exported as a .doc file. Like the file below. This TextEdit-created file displays correctly in Microsoft Office 2007/2010 too. And here’s how the same file displays in OpenOffice 3.3.0.

It isn’t as if these are two isolated incidents where OpenOffice.org has failed to be passably reliable, even for freeware software.
I have lost track of the number of times my documents / presentations have been messed up by OOo. Sometimes, opening a .docx file created and saved using OpenOffice.org fails to render correctly on the same system with the same install of OOo. Figure that. And while Microsoft may have a conflict of interest, Doug Mahugh makes a seemingly well-reasoned argument about how certain aspects of ODF file rendering are broken even among variants of OOo itself, like IBM’s Lotus Symphony.

Throughout this blog post, I have mentioned OpenOffice.org as the errant document suite, but the Document Foundation’s new ‘LibreOffice’ fork isn’t any better. A few weeks ago, I was working on a presentation that I needed to send for a job interview. I created it in OpenOffice 3.3.0 one day (saved as ODP) and picked up work the next day on a new install of LibreOffice 3.3.1. (The Document Foundation advises users to remove OOo before doing a LibreOffice install.) After I was finished, I saved a PPT and a PDF version of the file in addition to the native ODP file (my usual routine). I opened the file in Microsoft PowerPoint 2010 Viewer to check before sending it off, and got an error message telling me that the file...
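As an aside, the export-to-PDF workaround I keep falling back on can be scripted rather than done by hand through the GUI. The Python sketch below simply shells out to the `soffice` binary; note the assumption that your build supports the `--headless --convert-to` flags (newer LibreOffice builds do; older OpenOffice.org releases generally did not, and people used tools like unoconv instead):

```python
import subprocess

def odf_to_pdf_command(doc, outdir="."):
    """Build the headless LibreOffice conversion command for 'doc'.

    Assumes an 'soffice' binary on PATH that supports the
    --headless --convert-to flags; verify this for your install.
    """
    return ["soffice", "--headless",
            "--convert-to", "pdf",
            "--outdir", str(outdir),
            str(doc)]

def odf_to_pdf(doc, outdir="."):
    # Runs the conversion; raises CalledProcessError on failure.
    subprocess.run(odf_to_pdf_command(doc, outdir), check=True)

if __name__ == "__main__":
    # e.g. odf_to_pdf("report.odt") would write ./report.pdf
    print(odf_to_pdf_command("report.odt"))
```

Dropping a script like this into a folder of ODF files before a trip to the library printer saves a lot of ‘why does this look wrong on Windows’ grief.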

Quick note on Chrome OS Cr-48 pilot programme

By on Dec 12, 2010 in Tech Takes | 9 comments

Google’s Chrome OS test pilot programme has generated quite a buzz, with even people who asked for stickers getting a shiny new Chrome OS notebook to test out. It’s early days of course – they aren’t selling these Chrome OS notebooks until early 2011. I think the crucial factor in its success will be pricing – is it cheaper than normal netbooks?

Google has ‘solved’ the always-on connectivity issue by bundling in a free 100 MB 3G data subscription from Verizon, with the option to buy more in case a user needs it – and users certainly will need it. (100 MB is a pittance of a data allowance – I use up more than that on my crappy Nokia ‘smartphone’, which doesn’t have the same class of data-intensive apps that iPhone / Android do.) Running applications ‘off the cloud’ (storing everything online) is something you can already do on existing netbooks, laptops, and desktops – you can even get the same experience by installing Chrome Web Apps from the Chrome Web Store, just as you would on Chrome OS. So if a Chrome OS netbook is priced higher than or the same as a normal netbook, I don’t see why I should buy the former. The touted 10-second boot-up speed that Chrome OS has is not because it’s significantly better than the others, but because it uses a solid-state disk rather than a normal hard disk. Try booting Ubuntu on an SSD system and you’ll get similar startup times. (It’ll be a bit more, but come on – isn’t an extra five seconds worth it for having a ‘full’ system?) Chrome OS is essentially a Linux-based operating system just like Ubuntu, except that it purposefully blocks access to anything other than your ‘online filesystem’.

This is all moot for the casual user of course – they’ll love it. With Google’s marketing might, Chrome OS might even be a success in the way netbooks haven’t been. But they’ll potentially open themselves up to anti-trust lawsuits from their competitors.
Google has been able to avoid such allegations till now in the search engine market simply by saying “Users can choose a different search engine anytime they want”. That’s not the case with Chrome OS – you have to sign in with a Google account. (When Google’s distributing 60,000 test notebooks at no charge, destroying 25 for this video must have been approved without so much as an eyebrow being raised.) Once you start using Chrome OS at home, you’d be forced to use it at the office and other places too. That easy sharing of documents with friends and family? Well, that just means they’ll have to sign up for Google Accounts too to access shared files. Chrome OS simply leads to a scenario where everything is tightly locked in to Google’s network, with not much hope of switching. You simply can’t copy your files and shift from Windows to Mac (say) as you can with normal computers. If you decide one day to shift to Microsoft Office Web Apps instead of Google Docs, how do you migrate your data? What if you want to use Skype instead of Google Voice Chat? Skype doesn’t even have a web app version!

I also don’t buy the argument some tech analysts have made that Chrome OS could be positioning itself as a cheap IT solution for enterprise use, at the long tail of the usage chain, with adoption as point-of-sale terminals and for the mobile workforce. Companies’ IT departments are usually wary of vendor lock-in, and though Chrome OS may be cheap to deploy, I don’t reckon companies would want to give up complete control in the way that would be required of them. With this tight lock-in, with the user constantly signed in to Google, they have a pretty solid idea of what you do all the time, not just what you search for. They’d want to capitalise on this rich trove of usage data by trying to serve more targeted advertising. If Google sticks to its current vision AND Chrome OS becomes a success, it’s inevitable that their competitors will have a very strong anti-trust case in the courts.
Such an anti-trust case could very well bring Google as we know it close to oblivion, just like what almost happened in United States v. Microsoft. Thank you, but no thank you, Google. I’ll stick to my netbook, which gives me complete freedom to do what I want.

PS – If, however, you’ve already been seduced by Google Chrome OS’s ‘always online’ vision but can’t try it out because you aren’t in the Cr-48 pilot programme, give Jolicloud a go. It’s an Ubuntu-based cloud OS much like Chrome OS; there’s also an HTML5-based web version of it that you can try out from the Chrome Web Store. One of the complaints against Chrome OS has been that it doesn’t play Flash videos very well, which I’ve heard Jolicloud has sorted out (supports playback of HD Flash...