Flickr

Friday, May 7, 2010 5:19 PM By Dhina

This is a test post from flickr, a fancy photo sharing thing.

Purest form of Cloud Computing

Thursday, October 2, 2008 12:35 AM By Dhina

"Real cloud computing is when you're working with a provider where virtually every resource they offer is unlimited, and you're able to scale up to infinity on each of those resources. You can write applications without worrying about running out of anything. And usually you're paying for only what you use (or in the case of Google, you get a lot for free as well)."

I remember reading this somewhere: out in the clouds, you have enormous, practically unlimited power on the server side that your code can leverage without any struggle. Amazon offers one of the purest forms of cloud computing. As the name EC2 (Elastic Compute Cloud) suggests, the servers are elastic: if your app needs more power to spawn threads or run some heavy calculations, the platform stretches beyond the capacity it started with and fuels your app. And you pay only for what you use. ...
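
As a rough sketch of what "scale up when you need it" looks like in practice, here is a minimal example using the boto library; the AMI ID below is a placeholder, and a real system would trigger this from load metrics rather than by hand:

    import boto.ec2  # boto 2.x; assumes AWS credentials are configured

    conn = boto.ec2.connect_to_region("us-east-1")

    # When load grows, launch another instance of the app's machine image.
    # "ami-12345678" is a placeholder, not a real image ID.
    reservation = conn.run_instances("ami-12345678", instance_type="m1.small")
    instance = reservation.instances[0]
    print("launched", instance.id)

    # When load drops, terminate it; you pay only for the hours it ran.
    conn.terminate_instances(instance_ids=[instance.id])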

Sunday, September 21, 2008 6:09 AM By Dhina

This site shows the evolution of the Gmail chat window.
The chat window in Gmail went through rounds of tough reviews and feedback before it reached its current form.

Amazon's Simple DB Beta

Saturday, September 20, 2008 11:52 PM By Dhina

Amazon has introduced a really cool thing that works in conjunction with S3 and EC2: SimpleDB.

Imagine a startup investing a whole lot of money in buying a database, employing a person to administer it, tune its performance, store the data, and index it properly.
So, what's their actual business? Are they going off-track from it? Unfortunately, yes.
So this is a win-win situation for both Amazon and the folks out there running a startup.
They hand their structured data to Amazon, and their responsibility ends right there.

It's Amazon's responsibility to maintain and manage the database. The company can retrieve the data on demand, per user click, and pay only for what they used. Isn't it cool?

The data storage is schemaless. Amazon indexes everything and makes it safely accessible.
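
As a rough sketch of what that looks like from code, here is a minimal example using the boto library; the domain name, items, and attributes are made up for illustration:

    import boto.sdb  # boto 2.x; assumes AWS credentials are configured

    conn = boto.sdb.connect_to_region("us-east-1")
    domain = conn.create_domain("startup_customers")  # hypothetical domain

    # Schemaless: each item carries whatever attributes it needs.
    domain.put_attributes("cust-001", {"name": "Alice", "plan": "pro"})
    domain.put_attributes("cust-002", {"name": "Bob", "city": "Chennai"})

    # Every attribute is indexed by Amazon, so ad hoc queries just work.
    query = "select * from `startup_customers` where plan = 'pro'"
    for item in domain.select(query):
        print(item.name, dict(item))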

More here

Amazon S3

10:38 PM By Dhina

Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers.

Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.

Amazon S3 is intentionally built with a minimal feature set.

  • Write, read, and delete objects containing from 1 byte to 5 gigabytes of data each. The number of objects you can store is unlimited.
  • Each object is stored in a bucket and retrieved via a unique, developer-assigned key (see the sketch after this list).
  • A bucket can be located in the United States or in Europe. All objects within the bucket will be stored in the bucket's location, but the objects can be accessed from anywhere.
  • Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access. Objects can be made private or public, and rights can be granted to specific users.
  • Uses standards-based REST and SOAP interfaces designed to work with any Internet-development toolkit.
  • Built to be flexible so that protocol or functional layers can easily be added. Default download protocol is HTTP. A BitTorrent(TM) protocol interface is provided to lower costs for high-scale distribution. Additional interfaces will be added in the future.
  • Reliability backed with the Amazon S3 Service Level Agreement.
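
To make the object, bucket, and key model above concrete, here is a minimal sketch using the boto library; the bucket and key names are made up, and credentials are assumed to be configured:

    import boto  # boto 2.x; assumes AWS credentials are configured

    conn = boto.connect_s3()
    bucket = conn.create_bucket("my-example-bucket")  # hypothetical name

    # Write: an object lives in a bucket under a developer-assigned key.
    key = bucket.new_key("photos/2008/trip.jpg")
    key.set_contents_from_filename("trip.jpg")

    # Read: fetch it back over HTTP from anywhere.
    key.get_contents_to_filename("trip-copy.jpg")

    # Rights: objects can be made private or public.
    key.set_acl("public-read")

    # Delete when no longer needed.
    bucket.delete_key("photos/2008/trip.jpg")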

Amazon S3 is based on the idea that quality Internet-based storage should be taken for granted. It helps free developers from worrying about how they will store their data, whether it will be safe and secure, or whether they will have enough storage available. It frees them from the upfront costs of setting up their own storage solution as well as the ongoing costs of maintaining and scaling their storage servers. The functionality of Amazon S3 is simple and robust: Store any amount of data inexpensively and securely, while ensuring that the data will always be available when you need it. Amazon S3 enables developers to focus on innovating with data, rather than figuring out how to store it.

Amazon S3 was built to fulfill the following design requirements:

  • Scalable: Amazon S3 can scale in terms of storage, request rate, and users to support an unlimited number of web-scale applications. It uses scale as an advantage: Adding nodes to the system increases, not decreases, its availability, speed, throughput, capacity, and robustness.

  • Reliable: Store data durably, with 99.99% availability. There can be no single points of failure. All failures must be tolerated or repaired by the system without any downtime.

  • Fast: Amazon S3 must be fast enough to support high-performance applications. Server-side latency must be insignificant relative to Internet latency. Any performance bottlenecks can be fixed by simply adding nodes to the system.

  • Inexpensive: Amazon S3 is built from inexpensive commodity hardware components. As a result, frequent node failure is the norm and must not affect the overall system. It must be hardware-agnostic, so that savings can be captured as Amazon continues to drive down infrastructure costs.

  • Simple: Building highly scalable, reliable, fast, and inexpensive storage is difficult. Doing so in a way that makes it easy to use for any application anywhere is more difficult. Amazon S3 must do both.

A forcing-function for the design was that a single Amazon S3 distributed system must support the needs of both internal Amazon applications and external developers of any application. This means that it must be fast and reliable enough to run Amazon.com's websites, while flexible enough that any developer can use it for any data storage need.

Amazon S3 Design Principles

The following principles of distributed system design were used to meet Amazon S3 requirements:

  • Decentralization: Use fully decentralized techniques to remove scaling bottlenecks and single points of failure (a toy sketch follows this list).

  • Asynchrony: The system makes progress under all circumstances.

  • Autonomy: The system is designed such that individual components can make decisions based on local information.

  • Local responsibility: Each individual component is responsible for achieving its consistency; this is never the burden of its peers.

  • Controlled concurrency: Operations are designed such that no or limited concurrency control is required.

  • Failure tolerant: The system considers the failure of components to be a normal mode of operation, and continues operation with no or minimal interruption.

  • Controlled parallelism: Abstractions used in the system are of such granularity that parallelism can be used to improve performance and robustness of recovery or the introduction of new nodes.

  • Decompose into small well-understood building blocks: Do not try to provide a single service that does everything for everyone, but instead build small components that can be used as building blocks for other services.

  • Symmetry: Nodes in the system are identical in terms of functionality, and require no or minimal node-specific configuration to function.

  • Simplicity: The system should be made as simple as possible (but no simpler).
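
To give these principles one concrete, deliberately simplified flavor, here is a toy sketch (my illustration, not how S3 actually places data) in which identical nodes independently compute where an object lives:

    import hashlib

    # A toy placement function, shown only to illustrate decentralization
    # and symmetry; it is NOT S3's actual mechanism. Every node runs this
    # same code and can compute a key's owners from local information
    # alone, with no central coordinator.
    NODES = ["node-a", "node-b", "node-c", "node-d"]
    REPLICAS = 2  # keep two copies so a single node failure is tolerated

    def owners(key, nodes=NODES, replicas=REPLICAS):
        """Deterministically map a key to `replicas` distinct nodes."""
        start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
        return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

    print(owners("photos/2008/trip.jpg"))  # identical answer on every node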

Article about System Scalability

10:26 PM By Dhina

Scalability is often misunderstood in the developer community. Maybe this article will explain it better.

Scalability is frequently used as a magic incantation to indicate that something is badly designed or broken. Often you hear in a discussion “but that doesn’t scale” as the magical word to end an argument. This is often an indication that developers are running into situations where the architecture of their system limits their ability to grow their service. If scalability is used in a positive sense it is in general to indicate a desired property as in “our platform needs good scalability”.

What is it that we really mean by scalability? A service is said to be scalable if when we increase the resources in a system, it results in increased performance in a manner proportional to resources added. Increasing performance in general means serving more units of work, but it can also be to handle larger units of work, such as when datasets grow.
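
One way to state that definition formally (the notation here is mine, not the article's): writing T(R) for the throughput the service delivers at resource level R, a scalable service satisfies, roughly,

    % Notation assumed for illustration, not taken from the article:
    % T(R) is the throughput delivered with resource level R.
    T(kR) \approx k \, T(R) \qquad \text{for scale factors } k \ge 1

That is, doubling the resources should roughly double the units of work served.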

In distributed systems there are other reasons for adding resources to a system; for example to improve the reliability of the offered service. Introducing redundancy is an important first line of defense against failures. An always-on service is said to be scalable if adding resources to facilitate redundancy does not result in a loss of performance.

Why is scalability so hard? Because scalability cannot be an after-thought. It requires applications and platforms to be designed with scaling in mind, such that adding resources actually results in improving the performance or that if redundancy is introduced the system performance is not adversely affected. Many algorithms that perform reasonably well under low load and small datasets can explode in cost if either requests rates increase, the dataset grows or the number of nodes in the distributed system increases.
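
As a tiny illustration of that last point (my example, not the article's): a linear scan over a dataset looks harmless when the data is small, but its per-request cost grows with the dataset, while an indexed lookup stays roughly flat:

    import bisect
    import time

    def timed(f):
        """Return the wall-clock time taken by calling f()."""
        t0 = time.perf_counter()
        f()
        return time.perf_counter() - t0

    for n in (10_000, 1_000_000):
        data = list(range(n))   # already sorted
        target = n - 1          # worst case for the scan
        scan = timed(lambda: target in data)                    # O(n)
        idx = timed(lambda: bisect.bisect_left(data, target))   # O(log n)
        print(f"n={n:>9,}  scan={scan:.6f}s  indexed={idx:.6f}s")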

A second problem area is that growing a system through scale-out generally results in a system that has to come to terms with heterogeneity. Resources in the system increase in diversity as next generations of hardware come on line, as bigger or more powerful resources become more cost-effective or when some resources are placed further apart. Heterogeneity means that some nodes will be able to process faster or store more data than other nodes in a system and algorithms that rely on uniformity either break down under these conditions or underutilize the newer resources.

Is achieving good scalability possible? Absolutely, but only if we architect and engineer our systems to take scalability into account. For the systems we build we must carefully inspect along which axis we expect the system to grow, where redundancy is required, and how one should handle heterogeneity in this system, and make sure that architects are aware of which tools they can use under which conditions, and what the common pitfalls are.