Monday, February 28, 2011

Many Gmail Users Can't Find Their Messages

Imagine loading Gmail and noticing that all your messages have been deleted. This is a real problem for many Gmail users, who thought they had lost all of their messages. Here's one of the many reports from Gmail's forum:
Yes, whatever the error is on Google's end (and it clearly is that, not a hack, unless it's some kind of inside hack) it's basically reset my account so it's like a brand-new Gmail account. My contacts are intact, but nothing else--the folders have reset to default, my signature line is blank, the "theme" is changed back to the default and--of course--every single email from the last 7 years has vanished completely.

The Google Apps Status page mentions that "this issue affects less than 0.08% of the Google Mail userbase" and "Google engineers are working to restore full access". The users that are affected "will be temporarily unable to sign in".


This is a serious problem for Google and one of the biggest Gmail issues since the email service was released back in 2004.

Update: A Google engineer says that the "accounts that are affected are currently fully disabled. We're in the process of changing this to be a Gmail only disable so you should regain access to other Google services soon. This will also mean email to these accounts stops bouncing and gets queued up for later delivery instead."

Update 2: Google says that only 0.02% of the Gmail users were affected. "In some rare instances software bugs can affect several copies of the data. That's what happened here. Some copies of mail were deleted, and we've been hard at work over the last 30 hours getting it back for the people affected by this issue. To protect your information from these unusual bugs, we also back it up to tape. Since the tapes are offline, they're protected from such software bugs. But restoring data from them also takes longer than transferring your requests to another data center, which is why it's taken us hours to get the email back instead of milliseconds."

{ via Engadget }

Google and the Value of Social Networking (Part 3)

Paul Buchheit, the ex-Googler behind Gmail and a former Facebook employee, answered a question about the rivalry between Google and Facebook.
There is an interesting competitive element there because Facebook is growing very fast, and obviously, Google would like to compete in the social-networking space. They have finally realized its importance, and they are finding themselves, maybe for the first time, with the realization that there is someone who is way, way ahead of them.

There was a moment with Microsoft that they assumed that Google was like, "Well, yeah, search isn't that important. And if it does become important, we'll just hire some people and we'll take over." They kind of thought it was something they could win really easily, and they underestimated the difficulty of it. I kind of feel like Google may have reached that same moment with social networking, where they realized, A, it's important, and B, it's really hard to win.

Paul is not the first ex-Googler who thinks that Google didn't understand the importance of social networking. Another former Googler said that "there is some belief at Google that their DNA is not perfectly suited to build social products", while Aaron Iba, who worked on the Orkut team, noticed that "social networking [was viewed] as a frivolous form of entertainment rather than a real utility".

{ via Avinash }

Videos From the Old Googleplex

In February 1999, Google moved from Susan Wojcicki's garage to "new digs at 165 University Avenue in Palo Alto with just eight employees" and in August 1999, Google relocated to Mountain View: 2400 E. Bayshore. Five years later, Google moved to "the new Googleplex at 1600 Amphitheatre Parkway in Mountain View, giving 800+ employees a campus environment."

Former Google employee Doug Edwards posted some videos from November 1999 of the old Googleplex. At that time, Google's search engine was only available in English, it didn't include ads or image search results and it was the only Google service.


Sunday, February 27, 2011

Heading for GDC

[This post is by Chris Pruett, who writes regularly about games here, and is obviously pretty cranked about this conference. — Tim Bray]

Android will descend in force upon the Game Developers Conference in San Francisco this week; we’re offering a full day packed with sessions covering everything you need to know to build games on Android.

From 10 AM to 5 PM on Tuesday the 1st, North Hall Room 121 will be ground zero for Android Developer Day, with five engineering-focused sessions on everything from compatibility to native audio and graphics. Here's a quick overview; there’s more on the Game Developers Conference site:

  • Building Aggressively Compatible Android Games — Chris Pruett

  • C++ On Android Just Got Better: The New NDK — Daniel Galpin and Ian Ni-Lewis

  • OpenGL ES 2.0 on Android: Building Google Body — Nico Weber

  • Android Native Audio — Glenn Kasten and Jean-Michel Trivi

  • Evading Pirates and Stopping Vampires Using License Server, In App Billing, and AppEngine — Daniel Galpin and Trevor Johns

Our crack team of engineers and advocates spend their nights devising new ways to bring high-end game content to Android, and a full day of sessions just wasn't enough to appease them. So in addition, you can find incisive Android insight in other tracks:

Finally, you can visit us in the Google booth on the GDC Expo floor; stop by, fondle the latest devices, and check out the awesome games that are already running on them. We're foaming at the mouth with excitement about the Game Developers Conference next week, and you should be too.

Hope to see you there!

Saturday, February 26, 2011

New Google Profile Search

Google has a new specialized search engine for searching Google Profiles. It has a better interface than the regular Google Profiles search feature, it's integrated with Google Search and it shows additional links from people's profiles.

This feature is not yet enabled in the interface, but you can search Google Profiles by adding &tbs=prfl:1 to a Google Search URL. Here's an example.

Google could use the data from user profiles to provide advanced search features like restricting profiles to people who lived in Chicago, attended Long Island University and are interested in sports.


{ spotted by François Beaufort }

Friday, February 25, 2011

Store More Photos and Videos in Picasa Web Albums

You've probably noticed that Picasa Web's storage counter shows that you have more free space than a couple of days ago. It turns out that this is not a bug.

"We recently made a change whereby any pictures 800 pixels and under don't count towards used PWA storage. The new GB numbers you're seeing are the result of quota recalculations that were made," informs a Google employee.

The new feature encourages users to upload smaller images. If you use Picasa to upload your photos, there's a setting that lets you pick the dimensions of the uploaded photos; the "small" option is recommended "for publishing images on blogs and webpages". Prolific Blogger users who resize their photos before uploading them will no longer have to buy extra storage.

Another important change is that "all videos under 15 minutes also don't count towards used PWA storage". That means you can now upload short videos to Picasa Web Albums without worrying about the file size.

Update: "Photos less than 800 pixels x 800 pixels and video less than 15 minutes long that are uploaded to Picasa Web Album, Blogger, or Buzz don't count towards your storage quota." (Picasa Web's help center)

Google Video's Strange Disclaimer

Google Video's homepage shows a strange message below the list of query suggestions: "In accordance with local laws and regulations, some results were removed from this list." The message is displayed for every query you enter, so it's unlikely that some of the suggestions are removed.


Update: Google says that "the appearance of the message on every search was in fact a bug and has since been corrected."

Google Docs, Image Search and Copyright

Google Docs lets you pick Image Search results and add them to your documents. That's a good thing, but this feature could have been better thought out.

Google restricts the results to Creative Commons-licensed images that can be used commercially and can also be modified. These restrictions aren't always necessary, since not all documents are used for business purposes. Google also recommends that you "only select images that you have confirmed that you have the license to use", but it doesn't link to the pages that included the images. Google doesn't even include a small caption next to the image with links to the Creative Commons license and the original web page.


While this feature makes it easy to add image search results to your documents, it doesn't encourage users to visit the web pages that embedded the images or to give credit to the image creators because it's quite difficult to find these pages. You need to visit Google Image Search, type your query, restrict the results to images "labeled for commercial reuse with modification" and find the image you've previously picked. That's a lot of unnecessary work.

Thursday, February 24, 2011

Picasa Web's Multiple File Uploader

Picasa Web Albums has finally improved the uploading feature. You can now select multiple images from a folder and upload all of them. After uploading images, you can add captions and delete the images you don't like. It's a long overdue improvement that's especially important if you don't use Picasa.

Another change is that you can now upload videos without installing Picasa.


Picasa Web's new uploader uses HTML5 APIs, so it's not available in Internet Explorer, where you still have to install an ActiveX control.

{ Thanks, Přemysl Brýl. }

HTTPS YouTube

After Google enabled encrypted connections to Picasa Web Albums by default, it started to become obvious that all Google services will soon switch to HTTPS. Probably the most unlikely candidate for this change is YouTube, Google's biggest bandwidth hog, but the unexpected happened: go to a random video and you'll notice that all the resources use HTTPS.


YouTube API's blog has recently announced HTTPS support for embedded videos. "We're planning a gradual expansion of HTTPS across other aspects of the site. The first place you may see HTTPS YouTube URLs is in our various embed codes, all of which currently support HTTPS in addition to the standard HTTP. Anyone can try HTTPS with YouTube embeds today—simply change the protocol portion of the URL from http to https." You can also enable "use HTTPS" when you generate the embedding code.


The performance doesn't seem to be affected and, if everything goes well, YouTube will probably switch to HTTPS in the coming months.

Google Cloud Connect for Microsoft Office

After three months of beta testing, Google Cloud Connect for Microsoft Office is available for everyone. The rebranded version of DocVerse, software developed by the company of the same name that Google acquired last year, integrates with Google Docs and provides a bridge for Microsoft Office users who want to use online collaboration features without upgrading to Office 2010.

"Google Cloud Connect for Microsoft Office brings collaborative multi-person editing to the familiar Microsoft Office experience. You can share, backup, and simultaneously edit Microsoft Word, PowerPoint, and Excel documents with coworkers," explains Google. The software works with Microsoft Office 2003, Office 2007 and Office 2010.




By default, the plugin automatically saves online and syncs all the files you edit in Microsoft Office, but you can change this setting.


I created a new document in Word 2010, but Google saved it as a read-only Word file in Google Docs. Apparently, the document can only be edited using Microsoft Office and not using Google's online word processor. Since you can't even open existing files from Google Docs, this software seems to be too limited. It's useful if you and all your collaborators only use Microsoft Office and Google's plugin.

Google Recipe Search

Google Japan started to offer a recipe search feature last year. Now this feature is available in the US and for everyone who uses Google without country redirects.

"Recipe View lets you narrow your search results to show only recipes, and helps you choose the right recipe amongst the search results by showing clearly marked ratings, ingredients and pictures. To get to Recipe View, click on the Recipes link in the left-hand panel when searching for a recipe. You can search for specific recipes like [chocolate chip cookies], or more open-ended topics—like [strawberry] to find recipes that feature strawberries, or even a holiday or event, like [cinco de mayo]," explains Google.

Google finds recipes by detecting the pages that use markup like microdata, RDFa, and microformats for recipes. You've probably noticed that Google shows rich snippets for some recipe pages and sometimes includes thumbnails, total cooking time, the number of calories and user ratings.


The same structured data can now be used for filtering search results. For example, you can select certain ingredients, add restrictions for cooking time and the number of calories.


Recipe Search is one of the most obscure specialized search engines offered by Google and it's quite surprising to see it in the vertical navigation menu next to Book Search, Blog Search or Image Search. Google could create similar search engines for event search, people search and reviews search.

Google's landing page offers more information about this feature and suggests that you "select Recipes in the left-hand panel on the search results page". Unfortunately, the option is not yet available for everyone.

Animation in Honeycomb


[This post is by Chet Haase, an Android engineer who specializes in graphics and animation, and who occasionally posts videos and articles on these topics on his CodeDependent blog at graphics-geek.blogspot.com. — Tim Bray]

One of the new features ushered in with the Honeycomb release is a new animation system, a set of APIs in a whole new package (android.animation) that makes animating objects and properties much easier than it was before.

"But wait!" you blurt out, nearly projecting a mouthful of coffee onto your keyboard while reading this article, "Isn't there already an animation system in Android?"

Animation Prior to Honeycomb

Indeed, Android already has animation capabilities: there are several classes and lots of great functionality in the android.view.animation package. For example, you can move, scale, rotate, and fade Views and combine multiple animations together in an AnimationSet object to coordinate them. You can specify animations in a LayoutAnimationController to get automatically staggered animation start times as a container lays out its child views. And you can use one of the many Interpolator implementations like AccelerateInterpolator and BounceInterpolator to get natural, nonlinear timing behavior.

But there are a couple of major pieces of functionality lacking in the previous system.

For one thing, you can animate Views... and that's it. To a great extent, that's okay. The GUI objects in Android are, after all, Views. So as long as you want to move a Button, or a TextView, or a LinearLayout, or any other GUI object, the animations have you covered. But what if you have some custom drawing in your view that you'd like to animate, like the position of a Drawable, or the translucency of its background color? Then you're on your own, because the previous animation system only understands how to manipulate View objects.

The previous animations also have a limited scope: you can move, rotate, scale, and fade a View... and that's it. What about animating the background color of a View? Again, you're on your own, because the previous animations had a hard-coded set of things they were able to do, and you could not make them do anything else.

Finally, the previous animations changed the visual appearance of the target objects... but they didn't actually change the objects themselves. You may have run into this problem. Let's say you want to move a Button from one side of the screen to the other. You can use a TranslateAnimation to do so, and the button will happily glide along to the other side of the screen. And when the animation is done, it will gladly snap back into its original location. So you find the setFillAfter(true) method on Animation and try it again. This time the button stays in place at the location to which it was animated. And you can verify that by clicking on it - Hey! How come the button isn't clicking? The problem is that the animation changes where the button is drawn, but not where the button physically exists within the container. If you want to click on the button, you'll have to click the location that it used to live in. Or, as a more effective solution (and one just a tad more useful to your users), you'll have to write your code to actually change the location of the button in the layout when the animation finishes.

It is for these reasons, among others, that we decided to offer a new animation system in Honeycomb, one built on the idea of "property animation."

Property Animation in Honeycomb

The new animation system in Honeycomb is not specific to Views, is not limited to specific properties on objects, and is not just a visual animation system. Instead, it is a system that is all about animating values over time, and assigning those values to target objects and properties - any target objects and properties. So you can move a View or fade it in. And you can move a Drawable inside a View. And you can animate the background color of a Drawable. In fact, you can animate the values of any data structure; you just tell the animation system how long to run for, how to evaluate between values of a custom type, and what values to animate between, and the system handles the details of calculating the animated values and setting them on the target object.

Since the system is actually changing properties on target objects, the objects themselves are changed, not simply their appearance. So that button you move is actually moved, not just drawn in a different place. You can even click it in its animated location. Go ahead and click it; I dare you.

I'll walk briefly through some of the main classes at work in the new system, showing some sample code when appropriate. But for a more detailed view of how things work, check out the API Demos in the SDK for the new animations. There are many small applications written for the new Animations category (at the top of the list of demos in the application, right before the word App. I like working on animation because it usually comes first in the alphabet).

In fact, here's a quick video showing some of the animation code at work. The video starts off on the home screen of the device, where you can see some of the animation system at work in the transitions between screens. Then the video shows a sampling of some of the API Demos applications, to show the various kinds of things that the new animation system can do. This video was taken straight from the screen of a Honeycomb device, so this is what you should see on your system, once you install API Demos from the SDK.

Animator

Animator is the superclass of the new animation classes, and has some of the common attributes and functionality of the subclasses. The subclasses are ValueAnimator, which is the core timing engine of the system and which we'll see in the next section, and AnimatorSet, which is used to choreograph multiple animators together into a single animation. You do not use Animator directly, but some of the methods and properties of the subclasses are exposed at this superclass level, like the duration, startDelay and listener functionality.

The listeners tend to be important, because sometimes you want to key some action off of the end of an animation, such as removing a view after an animation fading it out is done. To listen for animator lifecycle events, implement the AnimatorListener interface and add your listener to the Animator in question. For example, to perform an action when the animator ends, you could do this:

    anim.addListener(new Animator.AnimatorListener() {
        public void onAnimationStart(Animator animation) {}
        public void onAnimationEnd(Animator animation) {
            // do something when the animation is done
        }
        public void onAnimationCancel(Animator animation) {}
        public void onAnimationRepeat(Animator animation) {}
    });

As a convenience, there is an adapter class, AnimatorListenerAdapter, that stubs out these methods so that you only need to override the one(s) that you care about:


    anim.addListener(new AnimatorListenerAdapter() {
        @Override
        public void onAnimationEnd(Animator animation) {
            // do something when the animation is done
        }
    });

ValueAnimator

ValueAnimator is the main workhorse of the entire system. It runs the internal timing loop that causes all of a process's animations to calculate and set values, and it has all of the core functionality that allows it to do this, including the timing details of each animation, information about whether an animation repeats, listeners that receive update events, and the capability of evaluating different types of values (see TypeEvaluator for more on this). There are two pieces to animating properties: calculating the animated values and setting those values on the object and property in question. ValueAnimator takes care of the first part: calculating the values. The ObjectAnimator class, which we'll see next, is responsible for setting those values on target objects.

Most of the time, you will want to use ObjectAnimator, because it makes the whole process of animating values on target objects much easier. But sometimes you may want to use ValueAnimator directly. For example, the object you want to animate may not expose setter functions necessary for the property animation system to work. Or perhaps you want to run a single animation and set several properties from that one animated value. Or maybe you just want a simple timing mechanism. Whatever the case, using ValueAnimator is easy; you just set it up with the animation properties and values that you want and start it. For example, to animate values between 0 and 1 over a half-second, you could do this:

    ValueAnimator anim = ValueAnimator.ofFloat(0f, 1f);
    anim.setDuration(500);
    anim.start();

But animations are a bit like the tree in the forest philosophy question ("If a tree falls in the forest and nobody is there to hear it, does it make a sound?"). If you don't actually do anything with the values, does the animation run? Unlike the tree question, this one has an answer: of course it runs. But if you're not doing anything with the values, it might as well not be running. If you started it, chances are you want to do something with the values that it calculates along the way. So you add a listener to it, to listen for updates at each frame. And when you get the callback, you call getAnimatedValue(), which returns an Object, to find out what the current value is.

    anim.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
        public void onAnimationUpdate(ValueAnimator animation) {
            Float value = (Float) animation.getAnimatedValue();
            // do something with value...
        }
    });

Of course, you don't necessarily always want to animate float values. Maybe you need to animate something that's an integer instead:

    ValueAnimator anim = ValueAnimator.ofInt(0, 100);

or in XML:

    <animator xmlns:android="http://schemas.android.com/apk/res/android"
        android:valueFrom="0"
        android:valueTo="100"
        android:valueType="intType"/>

In fact, maybe you need to animate something entirely different, like a Point, or a Rect, or some custom data structure of your own. The only types that the animation system understands by default are float and int, but that doesn't mean that you're stuck with those two types. You can use the Object version of the factory method, along with a TypeEvaluator (explained later), to tell the system how to calculate animated values for this unknown type:

    Point p0 = new Point(0, 0);
    Point p1 = new Point(100, 200);
    ValueAnimator anim = ValueAnimator.ofObject(pointEvaluator, p0, p1);

There are other animation attributes that you can set on a ValueAnimator besides duration, including:

  • setStartDelay(long): This property controls how long the animation waits after a call to start() before it starts playing.
  • setRepeatCount(int) and setRepeatMode(int): These functions control how many times the animation repeats and whether it repeats in a loop or reverses direction each time.
  • setInterpolator(TimeInterpolator): This object controls the timing behavior of the animation. By default, animations accelerate into and decelerate out of the motion, but you can change that behavior by setting a different interpolator. This function acts just like the one of the same name in the previous Animation class; it's just that the type of the parameter (TimeInterpolator) is different from that of the previous version (Interpolator). But the TimeInterpolator interface is just a super-interface of the existing Interpolator interface in the android.view.animation package, so you can use any of the existing Interpolator implementations, like BounceInterpolator, as arguments to this function on ValueAnimator.
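Put together, these attributes might be configured like this (a sketch against the Honeycomb APIs; the values are arbitrary):

```java
ValueAnimator anim = ValueAnimator.ofFloat(0f, 1f);
anim.setDuration(500);
anim.setStartDelay(100);                        // wait 100ms after start()
anim.setRepeatCount(ValueAnimator.INFINITE);    // repeat indefinitely...
anim.setRepeatMode(ValueAnimator.REVERSE);      // ...reversing direction each cycle
anim.setInterpolator(new BounceInterpolator()); // from android.view.animation
anim.start();
```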

ObjectAnimator

ObjectAnimator is probably the main class that you will use in the new animation system. You use it to construct animations with the timing and values that ValueAnimator takes, and also give it a target object and property name to animate. It then quietly animates the value and sets those animated values on the specified object/property. For example, to fade out some object myObject, we could animate the alpha property like this:

    ObjectAnimator.ofFloat(myObject, "alpha", 0f).start();

Note, in this example, a special feature that you can use to make your animations more succinct; you can tell it the value to animate to, and it will use the current value of the property as the starting value. In this case, the animation will start from whatever value alpha has now and will end up at 0.

You could create the same thing in an XML resource as follows:

    <objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
        android:valueTo="0"
        android:propertyName="alpha"/>

Note, in the XML version, that you cannot set the target object; this must be done in code after the resource is loaded:

    ObjectAnimator anim = (ObjectAnimator) AnimatorInflater.loadAnimator(context, resID);
    anim.setTarget(myObject);
    anim.start();

There is a hidden assumption here about properties and getter/setter functions that you have to understand before using ObjectAnimator: you must have a public "set" function on your object that corresponds to the property name and takes the appropriate type. Also, if you use only one value, as in the example above, you are asking the animation system to derive the starting value from the object, so you must also have a public "get" function which returns the appropriate type. For example, the class of myObject in the code above must have these two public functions in order for the animation to succeed:

    public void setAlpha(float value);
    public float getAlpha();

So by passing in a target object of some type and the name of some property foo supposedly on that object, you are implicitly declaring a contract that that object has at least a setFoo() function and possibly also a getFoo() function, both of which handle the type used in the animation declaration. If all of this is true, then the animation will be able to find those setter/getter functions on the object and set values during the animation. If the functions do not exist, then the animation will fail at runtime, since it will be unable to locate the functions it needs. (Note to users of ProGuard, or other code-stripping utilities: If your setter/getter functions are not used anywhere else in the code, make sure you tell the utility to leave the functions there, because otherwise they may get stripped out. The binding during animation creation is very loose and these utilities have no way of knowing that these functions will be required at runtime.)

View properties

The observant reader, or at least those who have not yet browsed on to some other article, may have pinpointed a flaw in the system thus far. If the new animation framework revolves around animating properties, and if animations will be used to animate, to a large extent, View objects, then how can they be used against the View class, which exposes none of its properties through set/get functions?

Excellent question: you get to advance to the bonus round and keep reading.

The way it works is that we added new properties to the View class in Honeycomb. The old animation system transformed and faded View objects by just changing the way that they were drawn. This was actually functionality handled in the container of each View, because the View itself had no transform properties to manipulate. But now it does: we've added several properties to View to make it possible to animate Views directly, allowing you to not only transform the way a View looks, but to transform its actual location and orientation. Here are the new properties in View that you can set, get and animate directly:

  • translationX and translationY: These properties control where the View is located as a delta from its left and top coordinates which are set by its layout container. You can run a move animation on a button by animating these, like this: ObjectAnimator.ofFloat(view, "translationX", 0f, 100f);.
  • rotation, rotationX, and rotationY: These properties control the rotation in 2D (rotation) and 3D around the pivot point.
  • scaleX and scaleY: These properties control the 2D scaling of a View around its pivot point.
  • pivotX and pivotY: These properties control the location of the pivot point, around which the rotation and scaling transforms occur. By default, the pivot point is at the center of the object.
  • x and y: These are simple utility properties to describe the final location of the View in its container, as a sum of the left/top and translationX/translationY values.
  • alpha: This is my personal favorite property. No longer is it necessary to fade out an object by changing a value on its transform (a process which just didn't seem right). Instead, there is an actual alpha value on the View itself. This value is 1 (opaque) by default, with a value of 0 representing full transparency (i.e., it won't be visible). To fade a View out, you can do this: ObjectAnimator.ofFloat(view, "alpha", 0f);

Note that all of the "properties" described above are actually available in the form of set/get functions (e.g., setRotation() and getRotation() for the rotation property). This makes them both possible to access from the animation system and (probably more importantly) likely to do the right thing when changed. That is, you don't want to scale an object and have it just sit there because the system didn't know that it needed to redraw the object in its new orientation; each of the setter functions takes care to run the appropriate invalidation step to make the rendering work correctly.

AnimatorSet

This class, like the previous AnimationSet, exists to make it easier to choreograph multiple animations. Suppose you want several animations running in tandem, like you want to fade out several views, then slide in other ones while fading them in. You could do all of this with separate animations, either by manually starting the animations at the right times or by setting startDelays on the various delayed animations. Or you could use AnimatorSet to do all of that for you. AnimatorSet allows you to play animations together, playTogether(Animator...), or one after the other, playSequentially(Animator...), or you can organically build up a set of animations that play together, sequentially, or with specified delays by calling the functions in the AnimatorSet.Builder class, with(), before(), and after(). For example, to fade out v1 and then slide in v2 while fading it in, you could do something like this:

    ObjectAnimator fadeOut = ObjectAnimator.ofFloat(v1, "alpha", 0f);
    ObjectAnimator mover = ObjectAnimator.ofFloat(v2, "translationX", -500f, 0f);
    ObjectAnimator fadeIn = ObjectAnimator.ofFloat(v2, "alpha", 0f, 1f);
    AnimatorSet animSet = new AnimatorSet();
    animSet.play(mover).with(fadeIn).after(fadeOut);
    animSet.start();

Like ValueAnimator and ObjectAnimator, you can create AnimatorSet objects in XML resources as well.
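For reference, a sequential-then-together set roughly equivalent to the Java example above could be declared in an animator XML resource along these lines (the file name and durations are illustrative):

```xml
<!-- res/animator/fade_and_move.xml -->
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:ordering="sequentially">
    <objectAnimator
        android:propertyName="alpha"
        android:valueTo="0"
        android:valueType="floatType"
        android:duration="300" />
    <set android:ordering="together">
        <objectAnimator
            android:propertyName="translationX"
            android:valueFrom="-500"
            android:valueTo="0"
            android:valueType="floatType"
            android:duration="300" />
        <objectAnimator
            android:propertyName="alpha"
            android:valueFrom="0"
            android:valueTo="1"
            android:valueType="floatType"
            android:duration="300" />
    </set>
</set>
```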

TypeEvaluator

I wanted to talk about just one more thing, and then I'll leave you alone to explore the code and play with the API demos. The last class I wanted to mention is TypeEvaluator. You may not use this class directly for most of your animations, but you should know that it's there in case you need it. As I said earlier, the system knows how to animate float and int values, but otherwise it needs some help knowing how to interpolate between the values you give it. For example, if you want to animate between the Point values in one of the examples above, how is the system supposed to know how to interpolate the values between the start and end points? Here's the answer: you tell it how to interpolate, using TypeEvaluator.

TypeEvaluator is a simple interface that you implement and that the system calls on each frame to help it calculate an animated value. It takes a floating point value representing the current elapsed fraction of the animation, along with the start and end values that you supplied when you created the animation, and returns the interpolated value between those two values at that fraction. For example, here's the built-in FloatEvaluator class used to calculate animated floating point values:

    public class FloatEvaluator implements TypeEvaluator {
        public Object evaluate(float fraction, Object startValue, Object endValue) {
            float startFloat = ((Number) startValue).floatValue();
            return startFloat + fraction * (((Number) endValue).floatValue() - startFloat);
        }
    }

But how does it work with a more complex type? For an example of that, here is an implementation of an evaluator for the Point class, from our earlier example:

    public class PointEvaluator implements TypeEvaluator {
        public Object evaluate(float fraction, Object startValue, Object endValue) {
            Point startPoint = (Point) startValue;
            Point endPoint = (Point) endValue;
            return new Point(startPoint.x + fraction * (endPoint.x - startPoint.x),
                    startPoint.y + fraction * (endPoint.y - startPoint.y));
        }
    }

Basically, this evaluator (and probably any evaluator you would write) is just doing a simple linear interpolation between two values. In this case, each 'value' consists of two sub-values, so it is linearly interpolating between each of those.

You tell the animation system to use your evaluator by either calling the setEvaluator() method on ValueAnimator or by supplying it as an argument in the Object version of the factory method. To continue our earlier example animating Point values, you could use our new PointEvaluator class above to complete that code:

    Point p0 = new Point(0, 0);
    Point p1 = new Point(100, 200);
    ValueAnimator anim = ValueAnimator.ofObject(new PointEvaluator(), p0, p1);

One of the ways that you might use this interface is through the ArgbEvaluator implementation, which is included in the Android SDK. If you animate a color property, you will probably either use this evaluator automatically (which is the case if you create an animator in an XML resource and supply colors as values) or you can set it manually on the animator as described in the previous section.
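The idea behind ArgbEvaluator is that a packed ARGB color can't be interpolated as one number without the channels bleeding into each other; each 8-bit channel has to be interpolated separately. A plain-Java sketch of that approach (illustrative only, not the actual Android implementation):

```java
public class ArgbLerp {
    // Interpolate each 8-bit channel of a packed ARGB color separately.
    // Treating the packed int as a single value would produce garbage
    // colors whenever a channel over- or underflows into its neighbor.
    public static int evaluate(float fraction, int start, int end) {
        int sa = (start >>> 24) & 0xff, sr = (start >> 16) & 0xff,
            sg = (start >> 8) & 0xff,  sb = start & 0xff;
        int ea = (end >>> 24) & 0xff,  er = (end >> 16) & 0xff,
            eg = (end >> 8) & 0xff,    eb = end & 0xff;
        int a = sa + (int) (fraction * (ea - sa));
        int r = sr + (int) (fraction * (er - sr));
        int g = sg + (int) (fraction * (eg - sg));
        int b = sb + (int) (fraction * (eb - sb));
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // Halfway between opaque black and opaque white.
        System.out.printf("%08x%n", evaluate(0.5f, 0xff000000, 0xffffffff));
        // prints ff7f7f7f
    }
}
```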

But Wait, There's More!

There's so much more to the new animation system that I haven't gotten to. There's the repetition functionality, the listeners for animation lifecycle events, the ability to supply multiple values to the factory methods to get animations between more than just two endpoints, the ability to use the Keyframe class to specify a more complex time/value sequence, the use of PropertyValuesHolder to specify multiple properties to animate in parallel, the LayoutTransition class for automating simple layout animations, and so many other things. But I really have to stop writing soon and get back to working on the code. I'll try to post more articles in the future on some of these items, but also keep an eye on my blog at graphics-geek.blogspot.com for upcoming articles, tutorials, and videos on this and related topics. Until then, check out the API demos, read the overview of Property Animation posted with the 3.0 SDK, dive into the code, and just play with it.

Wednesday, February 23, 2011

Best Practices for Honeycomb and Tablets

The first tablets running Android 3.0 (“Honeycomb”) will be hitting the streets on Thursday Feb. 24th, and we’ve just posted the full SDK release. We encourage you to test your applications on the new platform, using a tablet-size AVD.

Developers who’ve followed the Android Framework’s guidelines and best practices will find their apps work well on Android 3.0. The purpose of this post is to provide reminders of, and links to, those best practices.

Moving Toward Honeycomb

There’s a comprehensive discussion of how to work with the new release in Optimizing Apps for Android 3.0. The discussion includes the use of the emulator; most developers don’t have an Android tablet yet, so the emulator is the way to test and update apps for Honeycomb.

While your existing apps should work well, developers also have the option to improve their apps’ look and feel on Android 3.0 by using Honeycomb features; for example, see The Android 3.0 Fragments API. We’ll have more on that in this space, but in the meantime we recommend reading Strategies for Honeycomb and Backwards Compatibility for advice on adding Honeycomb polish to existing apps.

Specifying Features

There have been reports of apps not showing up in Android Market on tablets. Usually, this is because your application manifest has something like this:

<uses-feature android:name="android.hardware.telephony" />

Many of the tablet devices aren’t phones, and thus Android Market assumes the app is not compatible. See the documentation of <uses-feature>. However, such an app’s use of the telephony APIs might well be optional, in which case it should be available on tablets. There’s a discussion of how to accomplish this in Future-Proofing Your App and The Five Steps to Future Hardware Happiness.
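If the telephony use really is optional, the fix is to mark the feature as not required in the manifest:

```xml
<!-- Telephony is used if present, but the app works without it,
     so Android Market should not filter it off tablets. -->
<uses-feature android:name="android.hardware.telephony"
              android:required="false" />
```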

Rotation

The new environment is different from what we’re used to in two respects. First, you can hold the devices with any of the four sides up and Honeycomb manages the rotation properly. In previous versions, often only two of the four orientations were supported, and there are apps out there that relied on this in ways that will break them on Honeycomb. If you want to stay out of rotation trouble, One Screen Turn Deserves Another covers the issues.

The second big difference doesn’t have anything to do with software; it’s that a lot of people are going to hold these things horizontal (in “landscape mode”) nearly all the time. We’ve seen a few apps that have a buggy assumption that they’re starting out in portrait mode, and others that lock certain screens into portrait or landscape but really shouldn’t.

A Note for Game Developers

A tablet can probably provide a better game experience for your users than any handset can. Bigger is better. It’s going to cost you a little more work than developers of business apps, because quite likely you’ll want to rework your graphical assets for the big screen.

There’s another issue that’s important to game developers: Texture Formats. Read about this in Game Development for Android: A Quick Primer, in the section labeled “Step Three: Carefully Design the Best Game Ever”.

We've also added a convenient way to filter applications in Android Market based on the texture formats they support; see the documentation of <supports-gl-texture> for more details.

Happy Coding

Once you’ve held one of the new tablets in your hands, you’ll want to have your app not just running on it (which it probably already does), but expanding minds on the expanded screen. Have fun!

Android Gingerbread for Nexus One

Two months after Android Gingerbread was released, Nexus One users can finally update their phones to the latest Android version. "Gingerbread (Android 2.3.3) update now rolling out to Nexus S and Nexus One. Be patient, may take a few weeks for OTA to complete," informs Google. Ry Guy explains that Google "sends out OTA updates (...) incrementally to ensure that everything is going smoothly".

The good news is that Nexus One is the second Android phone updated to Gingerbread and it's likely that the feedback from Nexus S users helped Google fix the most important bugs. Unfortunately, Google is caught between releasing the Android version for tablets, continuing to improve Gingerbread, developing new Android apps and services, improving the Android Market, so the delays are inevitable.


{ via Android Spin }

Tuesday, February 22, 2011

Final Android 3.0 Platform and Updated SDK Tools


We are pleased to announce that the full SDK for Android 3.0 is now available to developers. The APIs are final, and you can now develop apps targeting this new platform and publish them to Android Market. The new API level is 11.

For an overview of the new user and developer features, see the Android 3.0 Platform Highlights.

Together with the new platform, we are releasing updates to our SDK Tools (r10) and ADT Plugin for Eclipse (10.0.0). Key features include:

  • UI Builder improvements in the ADT Plugin:
    • New Palette with categories and rendering previews. (details)
    • More accurate rendering of layouts to more faithfully reflect how the layout will look on devices, including rendering status and title bars to more accurately reflect screen space actually available to applications.
    • Selection-sensitive action bars to manipulate View properties.
    • Zoom improvements (fit to view, persistent scale, keyboard access) (details).
    • Improved support for <merge> layouts, as well as layouts with gesture overlays.
  • Traceview integration for easier profiling from ADT. (details)
  • Tools for using the Renderscript graphics engine: the SDK tools now compile .rs files into Java programming language files and native bytecode.

To get started developing or testing applications on Android 3.0, visit the Android Developers site for information about the Android 3.0 platform, the SDK Tools, and the ADT Plugin.

Monday, February 21, 2011

Open Gmail's PDF Attachments in Google Docs Viewer

A recent Gmail update changed the "View" links for PDF attachments, but only if you use Google Chrome. Instead of opening PDF files using Google Docs Viewer, Gmail now uses the PDF plugin included in Google Chrome. Unfortunately, this makes it more difficult to save PDF files to Google Docs.

Here's a simple trick that lets you open a PDF attachment in Google Docs Viewer. Click "View" next to the attachment and edit the URL: replace "view=att" with "view=gvatt" in the address bar. Another option is to right-click "View", copy the URL, paste in the address bar and replace "view=att" with "view=gvatt".
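The edit amounts to swapping one query parameter for another; as a trivial sketch (the URL below is made up for illustration):

```java
public class GmailPdfUrl {
    // Swap Gmail's native-viewer parameter for the Google Docs Viewer one.
    public static String toDocsViewer(String url) {
        return url.replace("view=att", "view=gvatt");
    }

    public static void main(String[] args) {
        System.out.println(toDocsViewer(
                "https://mail.google.com/mail/?ui=2&view=att&th=0"));
    }
}
```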


Obviously, you can also disable the built-in PDF plugin. Just type about:plugins in the address bar and click "Disable" next to "Chrome PDF Viewer".

Sunday, February 20, 2011

7 Chrome Annoyances and How to Fix Them

Guest post by Shankar Ganesh

Google Chrome was released more than two years ago and it's the browser of choice for many people. Despite having won hearts for its speed and elegance, Google Chrome does have some minor flaws that you might want to fix. Here are some of them:

1. No confirmation when closing multiple tabs

Google Chrome doesn't show a warning when you close a window with multiple tabs. If you tend to close Chrome windows accidentally, install Chrome Toolbox. The next time you close many tabs, you'll at least get a warning.


2. Basic history page

Google Chrome's history page is pretty basic and you can't restrict the list to a certain time interval.

The History 2 extension comes to the rescue by allowing you to sort web pages based on the day/week you visited them. History 2 allows you to delete multiple items from your history page at the click of a button – something that's not possible by default.


3. Missing image properties

There's no way to quickly examine an image when you're in Chrome. Fortunately, you can install Image Properties Context Menu, an extension that lets you right-click on an image and find information about the image size, location, dimensions and more.


4. No support for feeds

Chrome simply doesn't recognize RSS feeds and all you get is a page with gibberish text. If you install the RSS Subscription extension developed by Google, you can quickly subscribe to any feed using Google Reader, iGoogle, Bloglines or My Yahoo.


5. You can't send a web page by email

While other popular browsers allow you to quickly send any web page you're viewing by email, such an option is nowhere to be found in Google Chrome.

Worry not, because you can create a simple Javascript bookmarklet to open your default email program with the current URL. If Gmail is what you use, you can alternatively install the Send from Gmail extension to send the web page to Gmail.

6. No session manager

Closing Google Chrome and reopening it does not restore previously opened tabs. In order to do that, go to the Options dialog and enable Reopen tabs that were open last.

If you want advanced session saving options like the ability to create multiple sessions, try the Session Buddy addon for Google Chrome.

7. You can't switch to a tab from the Omnibox

Firefox 4 lets you switch to any open tab by typing relevant words into the address bar. If you'd like to see a similar feature in Chrome, install the Switch To Tab extension.

The next time you have too many open tabs, just type sw <TAB> followed by some words from the page. Hitting Enter switches to the tab that's listed as the first match.



Have you ever wanted to switch from Chrome to another browser because of a missing feature? Did you manage to find a workaround or an extension that adds the missing feature?




Shankar is a blogger and an engineering student from India who writes tech tips at KillerTechTips.com. His latest articles helped users block Facebook and improve productivity in Google Chrome. This post was inspired by an article written by Amit Agarwal.

Friday, February 18, 2011

More File Formats in Google Docs Viewer

Google Docs Viewer added support for a lot of new file formats. You can now use it to open Microsoft Excel spreadsheets, Microsoft PowerPoint presentations from Office 2007 and Office 2010, Apple Pages files, PostScript documents, Microsoft XPS documents, TrueType fonts, graphics from Adobe Illustrator, Adobe Photoshop, Autodesk AutoCad and SVG files.

"Not only does this round out support for the major Microsoft Office file types (we now support DOC, DOCX, PPT, PPTX, XLS and XLSX), but it also adds quick viewing capabilities for many of the most popular and highly-requested document and image types," informs Google.

Google Docs Viewer is integrated with Gmail and Google Docs, so you can now open many Gmail attachments and Google Docs files without installing additional software.



Thursday, February 17, 2011

Google Social Search, a Recommendation Engine

Google Social Search is not a new feature, but it wasn't that important until now. Google used to display at the bottom of the search results page a few links to pages created or recommended by your friends and social connections. The feature automatically obtained data from Google Reader, Google Buzz, Gmail Contacts, Twitter and other sites linked from your Google profile.

Google's blog announced that Social Search will be used to enhance Google results and will become a ranking signal. Social Search borrowed Hotpot's interface that annotates results with messages like "Dan rated this place 5 stars", so you can see why a page ranks so high.


"Social search results will now be mixed throughout your results based on their relevance (in the past they only appeared at the bottom). This means you'll start seeing more from people like co-workers and friends, with annotations below the results they've shared or created. So if you're thinking about climbing Mt. Kilimanjaro and your colleague Matt has written a blog post about his own experience, then we'll bump up that post with a note and a picture," explains Google.

Sometimes a web page is more valuable if it has been recommended by a friend because you probably trust that person. Google uses data from your Google account or publicly available data to generate a list of social connections, but you can't highlight the people you trust or customize the list. What you can do is add links to your Google profile and import data that's not publicly available. The Google Accounts page will include an option that lets you connect your accounts from services like LinkedIn and import your contacts.

Gmail Opens PDF Attachments Using Chrome's Viewer

If you use Google Chrome and you haven't disabled the built-in PDF plugin, you can now open PDF attachments from Gmail using your browser's viewer. Just click "View" next to the attachment and you'll notice that the PDF file opens faster and it looks much better.


If you disable the plugin or you use a different browser, Gmail continues to open PDF attachments using the Google Docs Viewer. Maybe Gmail should also detect Adobe Reader's plugin and use it instead of the online PDF viewer.

The Google Apps blog informs that this feature will be available in Google Apps next week. You can get it faster by enabling "pre-release features" in the Administrator Control Panel.

Wednesday, February 16, 2011

Google's New Navigation Bar, Publicly Available

The new navigation bar is slowly rolled out to all Google users. After more than 6 months of testing, the new navigation bar removes the clutter by grouping extraneous links in a menu inspired by Google Chrome. It also removes link underlining and replaces it with a colored bar. There's more spacing between the links, so the new navigation bar works better on a touchscreen device.



Another change is that Google shows your name instead of your email address. For some reason, Google doesn't link to the Google Profile and makes it more difficult to switch to a different account if you use multiple sign-in or Gmail delegation. Now you need to click "Switch account" to see the list of accounts you can use.


Unfortunately, Google didn't manage to add the bar to all its services, so you'll only see it if you use Google Web Search, Google Image Search, Google Realtime Search, Google Maps and Gmail.

{ Thanks, Benjamin and Locutus. }

Google One Pass

Google launched a service that allows publishers to manage paid content and subscriptions. Google One Pass is a "payment system that enables publishers to set the terms for access to their digital content". Once you pay to access some content, you should be able to read it from a computer, a tablet or a mobile phone, whether you're using a browser or a dedicated app.

Google One Pass tries to be flexible and easy to implement. "Publishers have control over how users can pay to access content and set their own prices. They can sell subscriptions of any length with auto-renewal, day passes (or other durations), individual articles or multiple-issue packages. Google One Pass also enables metered models, where a publisher can provide some content or a certain number of visits for free, but can charge frequent visitors or those interested in premium content based on the business model that the publisher prefers."


The service uses Google Checkout to handle payments and it's only available for publishers in Canada, France, Germany, Italy, Spain, UK and US. It's not clear if Google One Pass will integrate with Android's in-app payments. At the moment, the transaction fee for in-app purchases is 30%.

Apple has recently announced a subscription service for the App Store that uses the same revenue share from in-app purchases. "All we require is that, if a publisher is making a subscription offer outside of the app, the same (or better) offer be made inside the app, so that customers can easily subscribe with one-click right in the app." The transaction fee is way too high and hopefully Google won't make the same mistake.

More About Google's Reading Level Filter

Google's Daniel M. Russell has more information about the reading level filter, a feature recently added to the advanced search page.

The reading-level is based primarily on statistical models we built with the help of teachers. We paid teachers to classify pages for different reading levels, and then took their classifications to build a model of the intrinsic complexity of the text. (...) We also used data from Google Scholar, since most of the articles in Scholar are considered advanced.

So the breakdown isn't grade- or age-specific, but reflects the judgments of teachers as to overall level of difficulty. Roughly speaking, "Basic" is elementary level texts, while "Intermediate" is anything above that level up to technical and scholarly articles, a la the articles you'd find in Scholar.

That's not exact, but it's a fairly robust model that works across a wide variety of different text styles and web pages.


Unfortunately, the feature only works for English and it's probably difficult to add support for other languages.

Monday, February 14, 2011

Block Domains from Google's Search Results

Google has released a Chrome extension that lets you block domains and subdomains from Google's results. If you never find the results from experts-exchange.com useful, you can now click "Block experts-exchange.com" next to a search result from this site and you'll add the domain to your personal blacklist.


Unfortunately, the extension does little more than store a list of domains on your computer and hide the results from those domains. It's not tied to a web service and the blacklist is not saved to your Google account, so you can't use it from a different computer or another browser.

Matt Cutts says that the list of domains you've blocked is sent to Google. "We will study the resulting feedback and explore using it as a potential ranking signal for our search results."

Google SearchWiki used to offer a similar feature, but you could only use it to hide certain results. Blocking domains is more powerful and it will be interesting to see if it will become a regular Google search feature. I think it's too powerful and it might lead to unintended consequences: for example, some users might hide a domain just because a web page is not very helpful.

Default HTTPS Access for Picasa Web Albums

Last month, Picasa Web Albums started to support HTTPS and now it's enabled by default. It's probably the only popular photo sharing site that uses an encrypted connection by default and that's really impressive.


Picasa Web Albums is not the only Google service that has recently switched to HTTPS. Google Calendar, Google Docs and Google Sites are three other services that only use encrypted connections. You no longer have to worry about editing the URL and replacing "http" with "https" because Google automatically redirects URLs to HTTPS.

Wednesday, February 9, 2011

Introducing Renderscript

[This post is by R. Jason Sams, an Android engineer who specializes in graphics, performance tuning, and software architecture. —Tim Bray]

Renderscript is a key new Honeycomb feature which we haven’t yet discussed in much detail. I will address this in two parts. This post will be a quick overview of Renderscript. A more detailed technical post with a simple example will be provided later.

Renderscript is a new API targeted at high-performance 3D rendering and compute operations. The goal of Renderscript is to bring a lower level, higher performance API to Android developers. The target audience is the set of developers looking to maximize the performance of their applications who are comfortable working closer to the metal to achieve this. It provides the developer three primary tools: a simple 3D rendering API on top of hardware acceleration, a developer-friendly compute API similar to CUDA, and a familiar language in C99.

Renderscript has been used in the creation of the new visually-rich YouTube and Books apps. It is the API used in the live wallpapers shipping with the first Honeycomb tablets.

The performance gain comes from executing native code on the device. However, unlike the existing NDK, this solution is cross-platform. The development language for Renderscript is C99 with extensions, which is compiled to a device-agnostic intermediate format during the development process and placed into the application package. When the app is run, the scripts are compiled to machine code and optimized on the device. This eliminates the problem of needing to target a specific machine architecture during the development process.

Renderscript is not intended to replace the existing high-level rendering APIs or languages on the platform. The target use is for performance-critical code segments where the needs exceed the abilities of the existing APIs.

It may seem interesting that nothing above talked about running code on CPUs vs. GPUs. The reason is that this decision is made on the device at runtime. Simple scripts will be able to run on the GPU as compute workloads when capable hardware is available. More complex scripts will run on the CPU(s). The CPU also serves as a fallback to ensure that scripts are always able to run even if a suitable GPU or other accelerator is not present. This is intended to be transparent to the developer. In general, simpler scripts will be able to run in more places in the future. For now we simply leverage the CPU resources and distribute the work across as many CPUs as are present in the device.


The video above, captured through an Android tablet’s HDMI out, is an example of Renderscript compute at work. (There’s a high-def version on YouTube.) In the video we show a simple brute force physics simulation of around 900 particles. The compute script runs each frame and automatically takes advantage of both cores. Once the physics simulation is done, a second graphics script does the rendering. In the video we push one of the larger balls to show the interaction. Then we tilt the tablet and let gravity do a little work. This shows the power of the dual A9s in the new Honeycomb tablet.

Renderscript Graphics provides a new runtime for continuously rendering scenes. This runtime sits on top of HW acceleration and uses the developers’ scripts to provide custom functionality to the controlling Dalvik code. This controlling code will send commands to it at a coarse level such as “turn the page” or “move the list”. The commands the two sides speak are determined by the scripts the developer provides. In this way it’s fully customizable. Early examples of Renderscript graphics were the live wallpapers and 3d application launcher that shipped with Eclair.

With Honeycomb, we have migrated from GL ES 1.1 to 2.0 as the renderer for Renderscript. With this, we have added programmable shader support, 3D model loading, and much more efficient allocation management. The new compiler, based on LLVM, is several times more efficient than acc was during the Eclair-through-Gingerbread time frame. The most important change is that the Renderscript API and tools are now public.

The screenshot above was taken from one of our internal test apps. The application implements a simple scene-graph which demonstrates recursive script to script calling. The Androids are loaded from an A3D file created in Maya and translated from a Collada file. A3D is an on device file format for storing Renderscript objects.

Later we will follow up with more technical information and sample code.

Android 2.3.3 Platform, New NFC Capabilities

Several weeks ago we released Android 2.3, which introduced several new forms of communication for developers and users. One of those, Near Field Communications (NFC), let developers get started creating a new class of contactless, proximity-based applications for users.

NFC is an emerging technology that promises exciting new ways to use mobile devices, including ticketing, advertising, ratings, and even data exchange with other devices. We know there’s a strong interest to include these capabilities into many applications, so we’re happy to announce an update to Android 2.3 that adds new NFC capabilities for developers. Some of the features include:

  • A comprehensive NFC reader/writer API that lets apps read and write to almost any standard NFC tag in use today.
  • Advanced Intent dispatching that gives apps more control over how/when they are launched when an NFC tag comes into range.
  • Some limited support for peer-to-peer connection with other NFC devices.
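For example, an activity that wants to handle plain-text NDEF tags can declare an intent filter along these lines in its manifest (the MIME type here is just an illustration):

```xml
<intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:mimeType="text/plain" />
</intent-filter>
```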

We hope you’ll find these new capabilities useful and we’re looking forward to seeing the innovative apps that you will create using them.

Android 2.3.3 is a small feature release that includes a new API level, 10.
Going forward, we expect most devices shipping with an Android 2.3 platform to run Android 2.3.3 (or later). For an overview of the API changes, see the Android 2.3.3 Version Notes. The Android 2.3.3 SDK platform for development and testing is available through the Android SDK Manager.

Google Tests a Navigation Bar Integrated with Google Profiles

Google has been testing different versions of a new navigation bar that removes link underlining and adds a menu for the features that now clutter the bar.

The latest iteration of Google's experiment replaces your email address with your name and shows the photo from your Google Profile. Right now, creating a Google Profile is optional, but I wouldn't be surprised to see that change. If there's one thing that unifies almost all Google services, it's the navigation bar, and it makes sense to add social features to the persistent bar.



{ Thanks, Aaron and Ameet. }

Google Instant Supports Search Operators

When Google Instant was launched, many power users noticed that they had to press Enter after typing a query that included advanced search operators like site: and filetype:. Most likely, these queries are resource intensive and it's difficult to return the results very fast.

Now you can use these operators without having to press Enter or click the search button. The main advantage is that you can adjust your query and see the results as you type. Unfortunately, the results aren't displayed instantly.

Google's Interactive Doodle for Jules Verne's Birthday

Google's doodles are now a playground for creating small web apps. Static images are just the starting point for interactive apps that automatically load when you visit Google's homepage. Pac-Man, Isaac Newton, John Lennon and the particles doodle are some of the interactive doodles that surprised many Google users.

Yesterday's doodle celebrated Jules Verne's birthday and managed to use some clever animations without being annoying. "[The] doodle, celebrating Verne's 183rd birthday, tries to capture that sense of adventure and exploration. Using CSS3 (and with help from our resident tech wizards Marcin Wichary and Kris Hom), the doodle enables anyone to navigate the Nautilus (nearly) 20,000 leagues with the simple pull of a lever. And for those using devices with built-in accelerometers and the latest versions of Google Chrome or Firefox, it's even simpler — just tilt your device in the direction you want to explore and the Nautilus will follow," explained Google.



If you missed the doodle, you can now see a bigger version of the mini-app. It's a good opportunity to check if you have a fast browser and to use the "zoom out" feature of your browser.

{ via Google's Twitter account }

Google Translate App for iPhone

Yet another Google app initially developed for Android makes its way onto the Apple App Store: Google Translate. It doesn't have all the features of the Android app: there's no conversation mode, no SMS translation, no Google Suggest and no list of related phrases. Another issue is that the font size is way too big.

The application has a feature that's not available in the Android app: full-screen mode, so it doesn't make sense to use such a big font size, which is not even configurable. Google says that "the ability to easily enlarge the translated text to full-screen size" makes it "much easier to read the text on the screen, or show the translation to the person you are communicating with. Just tap on the zoom icon to quickly zoom in."


Why would you use the native application instead of visiting translate.google.com? The native application supports voice input for 15 languages, text-to-speech for 23 languages and it's better suited for quickly switching between multiple languages.

{ via Google Mobile Blog }

Thursday, February 3, 2011

9 Things to Try in Google Chrome 9

Google Chrome 9 is now available, two months after the previous release and two weeks later than Google's self-imposed deadline. Here are 9 features you should try in this new version:

1. WebGL is now enabled by default in Google Chrome and you can try the 3D web apps from Google's gallery. Don't miss Body Browser, a Google Earth for the human body, and the WebGL Aquarium.


2. Google Instant is now integrated with Chrome's address bar, but this feature is not for everyone because it automatically loads web pages as you type. It's disabled by default, so you need to enable it by checking "Enable Instant for faster searching and browsing" in the Options dialog.

3. Cloud Print can be enabled from Options > Under the hood if you use Windows. This feature lets you print from devices that can't communicate directly with printers. The first two applications that use Cloud Print are the mobile versions of Gmail and Google Docs.

4. Chrome supports WebP files. WebP is a new image format created by Google whose main advantage is that it offers better compression. "Our team focused on improving compression of the lossy images, which constitute the larger percentage of images on the web today. To improve on the compression that JPEG provides, we used an image compressor based on the VP8 codec that Google open-sourced in May 2010." Here's an example of a WebP image.

5. Right-click on an extension button next to the address bar and select "Hide button". When you change your mind, go to Tools > Extensions and click on "Show button" next to the corresponding extension.


6. Create desktop shortcuts for your web apps: right-click on an app in the new tab page and select "create shortcut". You can also add shortcuts to the Start Menu and the Quick Launch Bar if you use Windows.


7. Launch web apps in a new window. Right-click on a web app and select "open as window".

8. Install extensions that add custom menu options to images. For example, install Clip It Good to upload any image from a web page to Picasa Web Albums.


9. Install extensions that use the Omnibox API to associate keywords with new search engines. For example, install the DOI Resolver extension and type doi 10.1205/096030802760309188 in the address bar. The extension adds a new search engine and associates it with the keyword doi.
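An Omnibox keyword handler like the one in item 9 essentially maps a keyword query to a URL. Here's a hedged Python sketch of that mapping for DOIs (the function name is my own; dx.doi.org is the standard DOI resolver):

```python
from urllib.parse import quote

def resolve_doi_query(query: str) -> str:
    # Expect queries like "doi 10.1205/096030802760309188":
    # split off the keyword, then build a resolver URL from the identifier.
    keyword, _, doi = query.partition(" ")
    if keyword != "doi" or not doi.strip():
        raise ValueError("expected a query of the form 'doi <identifier>'")
    return "http://dx.doi.org/" + quote(doi.strip(), safe="/.")
```

The real extension does this inside Chrome via the Omnibox API; the sketch only shows the keyword-to-URL step.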
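As a side note on item 4, WebP files are easy to recognize: they use the RIFF container format, with "RIFF" at the start of the file and the "WEBP" form type at offset 8. A minimal Python sketch (the function name is my own):

```python
def is_webp(data: bytes) -> bool:
    # A WebP file is a RIFF container: bytes 0-3 are "RIFF" and
    # bytes 8-11 are the "WEBP" form type. Bytes 4-7 hold the
    # little-endian chunk size, which this check ignores.
    return len(data) >= 12 and data[:4] == b"RIFF" and data[8:12] == b"WEBP"
```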

Blogger's Android App

Blogger is catching up with the times: Android users can finally post timely updates to their blogs using a native app. You can always use Blogger's site or even write your posts in a mail client, but a mobile app is more user friendly.
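Posting from a mail client works because Blogger can accept messages at a per-blog Mail2Blogger address, using the subject as the post title and the body as the content. A minimal Python sketch of building such a message (the address shown is a made-up placeholder, and the actual SMTP send is omitted):

```python
from email.mime.text import MIMEText

def build_blog_post(subject: str, body: str, mail2blogger_addr: str) -> MIMEText:
    # Blogger uses the subject line as the post title and the
    # message body as the post content.
    msg = MIMEText(body, "plain", "utf-8")
    msg["Subject"] = subject
    msg["To"] = mail2blogger_addr  # e.g. username.secretword@blogger.com (placeholder)
    return msg

# Sending would use smtplib against your mail provider; omitted here.
```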

Blogger's Android app is really basic and doesn't offer many features. It's mostly useful if you want to write a new post, since you can't edit existing posts. The editor only lets you enter text and include one or more photos. You can add some labels and geotag your posts, taking advantage of your phone's GPS. If you haven't finished a post, you can always save it as a draft, but you won't be able to publish it from a computer because it's only saved locally.


Blogger's blog mentions that the app registers as a sharing option, so you can easily share a photo from the Gallery or a web page. "By switching to the List View, you can view all your drafts and published posts that you wrote using the app."

All in all, Blogger's Android app offers very few features and I would only use it to write short posts or to share photos from a trip. Maybe Blogger's team should also develop a mobile web app, which could be updated faster.

Android Market link: Blogger's app.