Archive for June, 2010

The 10 biggest falls in Sensex history


It was a bloodbath like never before at the Indian stock markets on May 22, 2006. The Sensex — the benchmark 30-share sensitive index of the Bombay Stock Exchange — plummeted by a nerve-rattling 1,111.71 points in intra-day trading, the largest single-day fall ever witnessed in the history of the Indian stock markets.

The bloodbath at the markets ended with the Sensex losing 457 points to close at 10,482.

The Nifty lost 166 points to close at 3,081.

Trading was suspended for an hour after the market hit the lower 10 per cent circuit breaker.

The crash followed a statement by the Central Board of Direct Taxes seeking to dispel suggestions that its draft circular was ambiguous.

Here are the 10 biggest falls in Indian stock market history:

1. May 18, 2006: The Sensex registered a fall of 826 points (6.76 per cent) to close at 11,391, its biggest ever, following heavy selling by FIIs and retail investors and weakness in global markets.

2. April 28, 1992: The Sensex registered a fall of 570 points (12.77 per cent) to close at 3,870, its second-largest, after the Harshad Mehta securities scam came to light.

3. May 17, 2004: Another Monday. The Sensex dropped by 565 points, its third-biggest fall ever, to close at 4,505. With the NDA out of power and the Left parties, part of the UPA coalition government, flexing their muscle, the Sensex witnessed its second-biggest intra-day fall of 842 points, twice attracting suspension of trading. At close, however, it regained some of its lost ground.

4. May 15, 2006: The market fell by 463 points to 11,822 points.

5. May 22, 2006: Sensex slumped by 457 points to 10,482.

6. May 19, 2006: Sensex slumped by 453 points to 10,939.

7. April 4, 2000: Sensex slumped by 361 points to 4,691.

8. May 12, 1992: Indian stock markets plunged 334 points to fall to 3,086.

9. May 14, 2004: Sensex lost 330 points to fall to 5,070.

10. May 6, 1992: Losing 327 points, the Sensex fell to 3,561 points.

Top 10 companies to work for in India



Posted by Pritika Ghura
The 2010 edition of ‘India’s Best Companies To Work For’ by The Economic Times has ranked the top 10 companies in India on the basis of management, facilities, infrastructure, growth prospects, etc. After interviewing CEOs, HR heads and employees to gauge what it takes to be the best company to work for, this study reports on how companies have nurtured their human capital in the face of the downturn, while taking some bold initiatives to maintain growth.

Check out the 10 best companies to work for:

1) Google India: Located in Bangalore and founded in 1998, Google India’s main work is in online search, online advertising and online applications. You cannot pinpoint one thing that makes Google one of the best places to work; it is a combination of values and operating principles that makes the work more enjoyable and encourages innovation. Its commitment to finding talented people is not based solely on ranks and percentages, but on the inclusion of those who have a sense of mission and derive massive satisfaction from their work.

2) MakeMyTrip: Located in Gurgaon and founded in 2000, this company’s main business is booking airline tickets, hotels, bus and rail tickets, and holiday packages. The immense sense of empowerment at MakeMyTrip is drawing in top talent in droves. For one, Amit Somani, the company’s Chief Products Officer, had no hang-ups about leaving Google to join MakeMyTrip.

3) Intel Technology India: Located in Bangalore and founded in 1988, this IT giant has tried to ingrain a culture of being open and direct. It is very serious about its open-door policy, and reaching out to seniors is the norm. Coupled with periodic feedback mechanisms, this keeps every Intellite on the ball.

4) Marriott Hotels: Located in Mumbai, one of the best names in hospitality ranks fourth among the best places to work in India because the company’s employee policy rests on three main legs: an open-door policy, empowerment and fairness.

5) NetApp India: Set foot into NetApp India’s headquarters in Bangalore and chances are ping-pong balls will be zooming past your head; this is definitely not a place for those who treat work life as a strictly calibrated affair. You will also rarely find people complaining about pay or about not getting a fair share of profits.

6) American Express: Founded in 2006 in Gurgaon and specializing in financial services, Amex has been working to build a Gen Next workplace as its ‘ecosystem for the future’, which is young, vibrant and diverse in the true sense.

7) NTPC: Located in Delhi with an employee strength of 24,708, the Rs 49,478.86-crore power major is staffed by die-hard loyalists who take pride in the 35-year-old brand and its culture of empowerment. It is one of the best public enterprises to work for.

8) PayPal India: Located in Chennai and specializing in e-commerce, PayPal, which is a part of eBay, believes in empowering technologists, especially women. There is a dedicated group named ‘eBay Women in Technology’.

9) Ajuba Solutions: Located in Chennai and specializing in Healthcare Revenue Cycle Management, this BPO believes in ‘inspired people, inspired results’. Named after the Hindi word for miracle, the company tries to live by its credo of “working wonders for our clients and employees”.

10) SAS Institute: Located in Mumbai, this company offers a stress-free environment and is flexible enough to give employees the option to work from home.

How to measure web traffic


Step 1
If you are the owner of the website, you should install visitor tracking code on your website, such as Google Analytics, which is free (see link in Resources at the bottom). It will track how many hits, page views and unique visitors your website receives daily, weekly and monthly. If you have an e-commerce website, it can also track conversions and sales amounts. Many other handy charts, graphs and tools are also included. This information is private and only available to you, the account owner. It is best to install the tracking code as soon as possible to start building history. If you are using Google AdWords (you pay to advertise your website) or Google AdSense (you place advertising on your website for profit), then you may already have access to Google Analytics.

Step 2
If you are the owner of the website, another popular website analyzer tool is WebTrends (see link in Resources at the bottom). It goes beyond basic analytics and measures all aspects of the online experience and helps you find ways to improve usability and optimize conversions for page views, paths and scenarios. However, it is not free and instead comes with a hefty price tag.

Step 3
If you are the owner of the website and you would like to share the traffic stats with the public, then you can install tracking code on your website, such as StatCounter which is free (see link in Resources at the bottom). It allows you to set the traffic stats to public or private, and it gives you the option to show or hide the hit counter on your website. Finally, your website will be assigned an ID number and a public link that can be given out to certain people, which allows them to view your website traffic stats without having to log in. Or, you can display the stats right on your webpage for everyone to see. This is useful if you want to prove how well your website is doing, in the case of selling advertising on the website or selling off the entire website. Again, it is important to install tracking code promptly to begin building historical data.

Step 4
There are many cases where you may not be the website owner but want to find out how well another website performs. However, since you do not own the site then you will not be able to see the full-blown traffic stats displayed by Google Analytics. And if you do not know the website owner then you will not know what the public link to their Stat Counter is either. Now it is time to do some investigation.

Step 5
You may be curious about how much traffic any website generates, or you may want to find out how well your competitor’s website is performing. The first place to go is TrafficEstimate, which is free (see link in Resources at the bottom). Search for a website address to see the estimated number of visitors to the site in the last 30 days, along with a simple graph. For example, Markus Frind, currently the biggest individual Google AdSense publisher, makes over $10,000 a day from his free dating website, Plenty of Fish. If you type http://www.plentyoffish.com into the TrafficEstimate tool, you can see the website gets over 4.7 million hits per month. Keep in mind that the estimate is just that: an estimate. You may need to divide the number by three, or multiply it by three, to gauge the true traffic. At least you can get a ballpark figure.
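
To make the factor-of-three caveat concrete, here is a trivial Python sketch that turns a single estimate (using the 4.7 million figure above) into a ballpark range:

```python
# Treat the estimator's figure as accurate only to within roughly a factor of three.
estimated_monthly_hits = 4_700_000  # figure reported by the estimator tool
low, high = estimated_monthly_hits / 3, estimated_monthly_hits * 3
print(f"Likely real traffic: {low:,.0f} to {high:,.0f} hits per month")
```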

Step 6
Now for the fun stuff. Alexa is the top web information authority and is free to use (see link in Resources at the bottom). Search for a website address and see how it ranks compared to all other websites in the world. For example, search for Yahoo.com or Google.com and click site info/overview, then traffic details. You will see their current traffic ranks are #1 and #2 in the world, and each consumes an estimated 30% of global traffic. You can search for any website address you are interested in to find its traffic rank. Alexa also provides other neat graphs and handy info. Although it does not show you the exact number of visitors, you can get a general idea of a website’s performance. By comparing the traffic rank of your website to your competitor’s website, you can estimate who gets more traffic. For example, YouTube.com ranks higher than Weather.com.

Step 7
Finally, if you want to see a wide variety of stats for any website then you can visit dnScoop for free (see link in Resources at the bottom). Search for any website address and you will see the website’s age, page rank, inbound links (referring sites), Alexa traffic rank, and even the estimated monetary value of the website if it were for sale.

Read more: How to Find Out Any Website’s Traffic, Visitors and Hits | eHow.com http://www.ehow.com/how_2320676_any-websites-traffic-visitors-hits.html

SEO terms


Most frequently asked questions
What makes Text Link Ads unique?
Text Link Ads are unique because they are static HTML links that can drive targeted traffic and help your link popularity, which is a top factor in organic search engine rankings.
How are Text Link Ads priced?
Text Link Ads are priced at a flat rate per month per link. You prepay for a 30-day run of your ad, and your account will be set to recurring billing via either PayPal or credit card. Your ad will never be turned off if it gets too many impressions or clicks. Our pricing algorithm factors in a website’s traffic, theme, ad position and link popularity when setting the flat monthly rate.
I just placed an order; when will my ad go live?
Most links are placed within 48 hours of the order being received. New ads require our publisher to review and accept the ad. The 30-day billing period will not begin until your ad is actually live.
What is Alexa?
A website’s “Alexa” ranking is a general indicator of the amount of traffic a website receives. The lower the number, the more traffic that site receives; for example, Yahoo.com is #1 and eBay.com is #8.
How do I review my active running ads?
You can review your currently running ads here.

The software behind Facebook


Exploring the software behind Facebook, the world’s largest site
Posted in Main on June 18th, 2010 by Pingdom
At the scale that Facebook operates, a lot of traditional approaches to serving web content break down or simply aren’t practical. The challenge for Facebook’s engineers has been to keep the site up and running smoothly in spite of handling close to half a billion active users. This article takes a look at some of the software and techniques they use to accomplish that.

Facebook’s scaling challenge
Before we get into the details, here are a few factoids to give you an idea of the scaling challenge that Facebook has to deal with:

Facebook serves 570 billion page views per month (according to Google Ad Planner).
There are more photos on Facebook than all other photo sites combined (including sites like Flickr).
More than 3 billion photos are uploaded every month.
Facebook’s systems serve 1.2 million photos per second. This doesn’t include the images served by Facebook’s CDN.
More than 25 billion pieces of content (status updates, comments, etc) are shared every month.
Facebook has more than 30,000 servers (and this number is from last year!)

Software that helps Facebook scale
In some ways Facebook is still a LAMP site (kind of), but it has had to change and extend its operation to incorporate a lot of other elements and services, and modify the approach to existing ones.

For example:

Facebook still uses PHP, but it has built a compiler for it so it can be turned into native code on its web servers, thus boosting performance.
Facebook uses Linux, but has optimized it for its own purposes (especially in terms of network throughput).
Facebook uses MySQL, but primarily as a key-value persistent storage, moving joins and logic onto the web servers since optimizations are easier to perform there (on the “other side” of the Memcached layer).
Then there are the custom-written systems, like Haystack, a highly scalable object store used to serve Facebook’s immense amount of photos, or Scribe, a logging system that can operate at the scale of Facebook (which is far from trivial).

But enough of that. Let’s present (some of) the software that Facebook uses to provide us all with the world’s largest social network site.

Memcached
Memcached is by now one of the most famous pieces of software on the internet. It’s a distributed memory caching system which Facebook (and a ton of other sites) use as a caching layer between the web servers and MySQL servers (since database access is relatively slow). Through the years, Facebook has made a ton of optimizations to Memcached and the surrounding software (like optimizing the network stack).

Facebook runs thousands of Memcached servers with tens of terabytes of cached data at any one point in time. It is likely the world’s largest Memcached installation.
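
As a rough illustration of the pattern (a sketch of our own, not Facebook's code), here is what a cache-aside lookup might look like in Python with the python-memcached client; the server addresses and the get_user_from_mysql helper are stand-ins:

```python
import memcache

# Hypothetical pool of Memcached servers sitting in front of the MySQL tier.
mc = memcache.Client(["10.0.0.1:11211", "10.0.0.2:11211"])

def get_user_from_mysql(user_id):
    # Stand-in for the slow path: a real query against MySQL.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = "user:%d" % user_id
    user = mc.get(key)                    # try the cache first
    if user is None:
        user = get_user_from_mysql(user_id)
        mc.set(key, user, time=300)       # keep it cached for five minutes
    return user
```

The point of the pattern is simply that the vast majority of reads never reach the database at all.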

HipHop for PHP
PHP, being a scripting language, is relatively slow when compared to code that runs natively on a server. HipHop converts PHP into C++ code which can then be compiled for better performance. This has allowed Facebook to get much more out of its web servers since Facebook relies heavily on PHP to serve content.

A small team of engineers (initially just three of them) at Facebook spent 18 months developing HipHop, and it is now live in production.

Haystack
Haystack is Facebook’s high-performance photo storage/retrieval system (strictly speaking, Haystack is an object store, so it doesn’t necessarily have to store photos). It has a ton of work to do; there are more than 20 billion uploaded photos on Facebook, and each one is saved in four different resolutions, resulting in more than 80 billion photos.

And it’s not just about being able to handle billions of photos; performance is critical. As we mentioned previously, Facebook serves around 1.2 million photos per second, a number which doesn’t include images served by Facebook’s CDN. That’s a staggering number.

BigPipe
BigPipe is a dynamic web page serving system that Facebook has developed. Facebook uses it to serve each web page in sections (called “pagelets”) for optimal performance.

For example, the chat window is retrieved separately, the news feed is retrieved separately, and so on. These pagelets can be retrieved in parallel, which is where the performance gain comes in, and it also gives users a site that works even if some part of it is deactivated or broken.
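
To make the idea more concrete, here is a heavily simplified sketch (our own illustration, not Facebook's implementation) in which the server flushes a page skeleton first and then streams each pagelet as its data becomes ready; the pagelet sources and the injectPagelet client-side helper are hypothetical:

```python
# Hypothetical pagelet sources; each callable fetches one page section independently.
def fetch_news_feed():
    return "<ul><li>status update...</li></ul>"

def fetch_chat_window():
    return "<div>chat...</div>"

PAGELETS = {"news_feed": fetch_news_feed, "chat": fetch_chat_window}

def render_page():
    # Send the skeleton immediately so the browser can start rendering.
    yield "<html><body><div id='news_feed'></div><div id='chat'></div>"
    for pagelet_id, fetch in PAGELETS.items():
        # In BigPipe a small JS library injects each chunk into its placeholder;
        # here we just emit the payload as soon as it is available.
        yield "<script>injectPagelet(%r, %r)</script>" % (pagelet_id, fetch())
    yield "</body></html>"

if __name__ == "__main__":
    for chunk in render_page():
        print(chunk)
```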

Cassandra
Cassandra is a distributed storage system with no single point of failure. It’s one of the poster children for the NoSQL movement and has been made open source (it’s even become an Apache project). Facebook uses it for its Inbox search.

Other than Facebook, a number of other services use it, for example Digg. We’re even considering some uses for it here at Pingdom.

Scribe
Scribe is a flexible logging system that Facebook uses for a multitude of purposes internally. It’s been built to be able to handle logging at the scale of Facebook, and automatically handles new logging categories as they show up (Facebook has hundreds).

Hadoop and Hive
Hadoop is an open source map-reduce implementation that makes it possible to perform calculations on massive amounts of data. Facebook uses this for data analysis (and as we all know, Facebook has massive amounts of data). Hive originated from within Facebook, and makes it possible to use SQL queries against Hadoop, making it easier for non-programmers to use.
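
For a taste of what map-reduce looks like in practice, here is the classic word-count example written in Python in the style of a Hadoop Streaming job (a generic illustration, not one of Facebook's jobs); with Hive, the same result could be expressed as a single SQL-like query along the lines of SELECT word, COUNT(*) FROM words GROUP BY word.

```python
#!/usr/bin/env python
# Word count in the Hadoop Streaming style: a mapper emits "word<TAB>1" lines,
# the framework sorts them, and a reducer sums the counts per word.
# Shown here as one self-contained script for illustration.
import sys
from itertools import groupby

def mapper(lines):
    for line in lines:
        for word in line.split():
            yield "%s\t1" % word

def reducer(sorted_lines):
    keyed = (line.split("\t") for line in sorted_lines)
    for word, group in groupby(keyed, key=lambda kv: kv[0]):
        yield "%s\t%d" % (word, sum(int(count) for _, count in group))

if __name__ == "__main__":
    for line in reducer(sorted(mapper(sys.stdin))):
        print(line)
```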

Both Hadoop and Hive are open source (Apache projects) and are used by a number of big services, for example Yahoo and Twitter.

Thrift
Facebook uses several different languages for its different services. PHP is used for the front-end, Erlang is used for Chat, Java and C++ are also used in several places (and perhaps other languages as well). Thrift is an internally developed cross-language framework that ties all of these different languages together, making it possible for them to talk to each other. This has made it much easier for Facebook to keep up its cross-language development.
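
As an illustration of what the calling side can look like, here is a hedged Python sketch of a Thrift client; the SearchService module and its query method are hypothetical and would normally be generated by the Thrift compiler from a .thrift interface definition:

```python
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol

# Hypothetical module generated by the Thrift compiler from a search.thrift file;
# the same IDL can generate matching C++, Java, PHP or Erlang code.
from search import SearchService

transport = TTransport.TBufferedTransport(TSocket.TSocket("localhost", 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = SearchService.Client(protocol)

transport.open()
results = client.query("pingdom")   # hypothetical RPC defined in the IDL
transport.close()
```

The server could be written in a completely different language, which is exactly the cross-language glue the article describes.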

Facebook has made Thrift open source and support for even more languages has been added.

Varnish
Varnish is an HTTP accelerator which can act as a load balancer and also cache content which can then be served lightning-fast.

Facebook uses Varnish to serve photos and profile pictures, handling billions of requests every day. Like almost everything Facebook uses, Varnish is open source.

Other things that help Facebook run smoothly
We have mentioned some of the software that makes up Facebook’s system(s) and helps the service scale properly. But handling such a large system is a complex task, so we thought we would list a few more things that Facebook does to keep its service running smoothly.

Gradual releases and dark launches
Facebook has a system they call Gatekeeper that lets them run different code for different sets of users (it basically introduces different conditions in the code base). This lets Facebook do gradual releases of new features, run A/B tests, activate certain features only for Facebook employees, and so on.
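
Gatekeeper's internals aren't public, but a minimal sketch of this kind of per-user check might look something like the following (the feature name, groups and rollout percentage are made up for illustration):

```python
import hashlib

# Hypothetical feature configuration: who sees the feature, and at what rollout percentage.
FEATURES = {
    "new_photo_viewer": {"rollout_percent": 5, "allow_groups": {"employees"}},
}

def feature_enabled(feature, user_id, groups=()):
    cfg = FEATURES.get(feature)
    if cfg is None:
        return False
    if set(groups) & cfg["allow_groups"]:
        return True  # e.g. activate the feature only for Facebook employees
    # Hash the user into a stable bucket so the same users stay in the rollout
    # as the percentage is ramped up.
    digest = hashlib.md5(("%s:%s" % (feature, user_id)).encode()).hexdigest()
    return int(digest, 16) % 100 < cfg["rollout_percent"]

# Branch in the code base depending on the check.
if feature_enabled("new_photo_viewer", user_id=42):
    pass  # new code path
else:
    pass  # existing code path
```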

Gatekeeper also lets Facebook do something called “dark launches”, which is to activate elements of a certain feature behind the scenes before it goes live (without users noticing since there will be no corresponding UI elements). This acts as a real-world stress test and helps expose bottlenecks and other problem areas before a feature is officially launched. Dark launches are usually done two weeks before the actual launch.

Profiling of the live system
Facebook carefully monitors its systems (something we here at Pingdom of course approve of), and interestingly enough it also monitors the performance of every single PHP function in the live production environment. This profiling of the live PHP environment is done using an open source tool called XHProf.

Gradual feature disabling for added performance
If Facebook runs into performance issues, there are a large number of levers that let them gradually disable less important features to boost performance of Facebook’s core features.

The things we didn’t mention
We didn’t go much into the hardware side in this article, but of course that is also an important aspect when it comes to scalability. For example, like many other big sites, Facebook uses a CDN to help serve static content. And then of course there is the huge data center Facebook is building in Oregon to help it scale out with even more servers.

And aside from what we have already mentioned, there is of course a ton of other software involved. However, we hope we were able to highlight some of the more interesting choices Facebook has made.

Facebook’s love affair with open source
We can’t complete this article without mentioning how much Facebook likes open source. Or perhaps we should say, “loves”.

Not only is Facebook using (and contributing to) open source software such as Linux, Memcached, MySQL, Hadoop, and many others, it has also made much of its internally developed software available as open source.

Examples of open source projects that originated from inside Facebook include HipHop, Cassandra, Thrift and Scribe. Facebook has also open-sourced Tornado, a high-performance web server framework developed by the team behind FriendFeed (which Facebook bought in August 2009).

(A list of open source software that Facebook is involved with can be found on Facebook’s Open Source page.)

More scaling challenges to come
Facebook has been growing at an incredible pace. Its user base is increasing almost exponentially and is now close to half a billion active users, and who knows what it will be by the end of the year. The site seems to be growing by about 100 million users every six months or so.

Facebook even has a dedicated “growth team” that constantly tries to figure out how to make people use and interact with the site even more.

This rapid growth means that Facebook will keep running into various performance bottlenecks as it’s challenged by more and more page views, searches, uploaded images, status messages, and all the other ways that Facebook users interact with the site and each other.

But this is just a fact of life for a service like Facebook. Facebook’s engineers will keep iterating and coming up with new ways to scale (it’s not just about adding more servers). For example, Facebook’s photo storage system has already been completely rewritten several times as the site has grown.

So, we’ll see what the engineers at Facebook come up with next. We bet it’s something interesting. After all, they are scaling a mountain that most of us can only dream of: a site with more users than most countries have people. When you do that, you had better get creative.

Data sources: Various presentations by Facebook engineers, as well as the always informative Facebook engineering blog.

Cloud computing by Ubuntu for private clouds


Private clouds give you flexible power in your own IT infrastructure. With Ubuntu Enterprise Cloud, you get the benefits of cloud computing behind the security of your firewall. Deploy workloads and have them running immediately. Grow or shrink computing capacity to meet your application’s needs.

Immediacy

Provides a self-service IT capability that enables new applications to be rapidly deployed whenever needed.

Elasticity

Demand for resources is met dynamically, with computing power flexing to meet users’ needs swiftly and seamlessly.

Compatible technology

Ubuntu Enterprise Cloud offers the same Application Programming Interfaces (APIs) as Amazon EC2, so you can build your applications to run on both platforms.
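
As a quick sketch of what that compatibility buys you (the hostname, port, credentials and image ID below are placeholders for your own Eucalyptus-based installation), a Python script using the boto library can address the private cloud in the same way it would address Amazon EC2:

```python
import boto
from boto.ec2.regioninfo import RegionInfo

# Point boto at the private Ubuntu Enterprise Cloud endpoint instead of Amazon.
region = RegionInfo(name="eucalyptus", endpoint="cloud.example.internal")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR_EC2_ACCESS_KEY",
    aws_secret_access_key="YOUR_EC2_SECRET_KEY",
    is_secure=False,
    region=region,
    port=8773,
    path="/services/Eucalyptus",
)

# The same calls work unchanged against Amazon EC2 (with an Amazon image ID).
reservation = conn.run_instances("emi-12345678", instance_type="m1.small")
print(reservation.instances[0].id)
```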

Rapid deployment

Create your initial cloud infrastructure in minutes and grow it over time. The current cloud creation speed record stands at 25 minutes.

Security

Because data is kept behind the firewall on your company’s infrastructure, fewer changes to existing governance, security and audit procedures are needed.

Optimise resources

Make the most of your existing hardware and network infrastructure by building a private cloud. You get the benefits of a cloud while maximising return on existing investments.

Trust

Uses Ubuntu’s trusted, stable and lean operating system within the cloud environment.

Bellwort technologies