Monday, August 31, 2009

INSTALLING FIREFOX 3.5 IN UBUNTU 8.04 HARDY HERON

STEP 1:
sudo apt-get install libstdc++5 libnotify-bin

STEP 2:
Download the Ubuntuzilla .deb package, or visit the download page to confirm that you have the latest version.

STEP 3:
Double-click the downloaded .deb package to install Ubuntuzilla.

STEP 4:
Run ubuntuzilla.py from the command line by typing it into a Terminal.

STEP 5:
When you are prompted to choose a language, type 14 if you are not sure which one you want (this is the default, en-US).
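For reference, here is the whole procedure as one terminal session. This is only a sketch: the .deb file name below is illustrative, so substitute the package you actually downloaded, and dpkg stands in for the double-click install.

sudo apt-get install libstdc++5 libnotify-bin   # step 1: prerequisites
sudo dpkg -i ubuntuzilla_4.9.5_all.deb          # steps 2-3: install the downloaded package (file name is illustrative)
ubuntuzilla.py                                  # steps 4-5: follow the prompts; 14 selects the default en-US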



That's it. Go to Applications > Internet > Firefox, and you will have the latest Firefox 3.5 up and running. It will even prompt you with the latest updates (you can disable this feature at any time).

Monday, August 24, 2009

Red Hat, IBM, Novell Major Contributors to Exploding Linux Kernel Development

The Linux Foundation reports that 2.7 million lines of code have been added to the Linux kernel over the past 16 months. The open source project now has more than 11.5 million lines of code. The number of lines added, removed, and changed each day has increased 70 percent, 68 percent, and 32 percent, respectively, from the previous report in April 2008. And the number of developers contributing to each kernel release cycle, which comes every two to three months, has risen about 10 percent. More than 5,000 individual developers from nearly 500 companies have contributed to the kernel, and the greatest support continues to come from Red Hat, IBM, and Novell. Participation from the individual development community has doubled over the past three years. The increasing rate of change and jump in contributors is a sign of a "vibrant and active community, constantly causing the evolution of the kernel in response to a number of different environments it is used in," the report says. The foundation also suggests the pace of development will continue to accelerate.

Wednesday, August 19, 2009

eyeOs


eyeOS is a free software web desktop following the cloud computing concept, written mainly in PHP, XML, and JavaScript. It acts as a platform for web applications written using the eyeOS Toolkit, and includes a desktop environment with 67 applications and system utilities. The eyeOS project aims to build a free software alternative to the big cloud computing services, especially those which keep the data on their servers. With eyeOS, the data is always kept on the local server.
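As a rough sketch of what self-hosting looks like on a typical LAMP server (the download URL, archive name, and paths below are illustrative, not taken from the eyeOS documentation):

cd /var/www                                          # Apache document root on many distributions
wget http://eyeos.org/downloads/eyeos-latest.tar.gz  # fetch an eyeOS release (illustrative URL)
tar xzf eyeos-latest.tar.gz                          # unpack into the web root
# then browse to http://your-server/eyeos/ to run the web installer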

http://wiki.eyeos.org

Wi-Fi via White Spaces

The transition from analog to digital broadcasts has opened up radio spectrum that could be used to deliver long-range, low-cost wireless Internet service using white spaces, which are empty fragments of the spectrum scattered between used frequencies. White space frequencies could be used to provide broadband Internet access in rural areas and fill in gaps in city Wi-Fi networks. For example, Microsoft Research's Ranveer Chandra says white space frequencies could be used to allow people to connect to their home network from up to a mile away. Last November, the Federal Communications Commission (FCC) ruled that companies could build devices that transmit over white spaces, but required that those devices not interfere with existing broadcasts. Microsoft researchers have designed a series of protocols, called White Fi, to account for the restrictions involved in using white spaces. Chandra says wireless networking has traditionally used an open spectrum with all users being equal shareholders, but in white spaces some users are primary users. Chandra says his research team recently received an experimental license from the FCC allowing them to build a prototype White Fi system on the Microsoft Research campus. The researchers will send their findings to the FCC in the hope that the data will help establish future white-space regulations. The blueprints for a computer network that uses white spaces were presented at ACM's SIGCOMM 2009 conference, held August 17-21 in Barcelona, Spain.

http://www.technologyreview.com/communications/23271/?a=f

Saturday, August 15, 2009

Microsoft Team Traces Malicious Users (Technology Review, 08/13/09, by Robert Lemos)

In a paper that will be presented at ACM SIGCOMM 2009, which takes place Aug. 17-21 in Barcelona, Spain, Microsoft researchers will demonstrate HostTracker, software that removes the anonymity from malicious Internet activity. The researchers were able to identify the machines responsible for anonymous attacks, even when the host's IP address rapidly changed. The researchers say HostTracker could lead to better defenses against online attacks and spam campaigns. For example, security firms could create a clearer picture of which Internet hosts should be blocked from sending traffic to their clients, and cybercriminals would have a more difficult time disguising their activities as legitimate communications. The researchers analyzed a month's worth of data collected from a large email service provider to attempt to determine users responsible for sending spam. Tracking the origins of a message involved reconstructing relationships between account IDs and the hosts used to connect to the email service. The researchers grouped all the IDs accessed from different hosts over a certain time period, and the HostTracker software searched through this data to resolve any conflicts. The researchers also developed a way to automatically blacklist traffic from an IP address if HostTracker determines that the host at that address has been compromised. HostTracker was able to block malicious traffic with an error rate of 5 percent, and using additional information to identify good-user behavior reduced the error rate to less than 1 percent.
http://www.technologyreview.com/computing/23224/

Monday, August 10, 2009

Replace Windows Live Messenger with Emesene

Tired of Windows Live Messenger bloat and wishing that there was a simpler and cleaner replacement that would let you use your live.com and hotmail.com accounts? Look no further, now you can have all that messenger goodness with Emesene!


Installation & Initial Startup


The nice thing about Emesene is that it is an open source messenger that works on Windows and Linux (cross-platform is always a good thing!).
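If you happen to be on Ubuntu or another Debian-based distribution, installation may be as simple as pulling it from the repositories (assuming your release carries an emesene package; otherwise use the download links at the end of this post):

sudo apt-get install emesene   # installs Emesene along with its Python/GTK dependencies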

One point of interest during the install is that Emesene states that it will require 51.4 MB of disk space…but this messenger is more than worth it!

Once you have finished the installation process and started Emesene for the first time, you will see the initial login window. To login you will need to enter your full live.com or hotmail.com address (i.e. user-name@live.com). You can select to have Emesene "Remember me", "Remember password", and "Auto-Login". As with other messengers, you may also select your Status before fully logging in.

Pre & Post Login Menus

Here is a quick look at the pre-login menus…not too much that you can access at the moment.

Once you do get logged in though, you will have access to the following menus.

Plugins Manager


You may access the Plugins Manager through the Options Menu. To add plugins, select the ones that you are interested in and then click "Load New Plugins". Notice that a brief description is provided for each plugin selected directly beneath the selection area (very nice!).

Preferences

The Preferences Window has three tabs to choose from. Here you can see the first one for General Preferences. You can make adjustments for File Transfers, Desktop Settings, and Connection Settings. Notice that one of the Desktop Settings will require a restart if selected.

In the Appearance Tab you can make adjustments for Icon Size, Themes, Smiles, Text Formatting & Layout in Conversations, and Color Schemes. If you need to, you also have the option to "Revert" to the original default setup (wonderful!).

In the Interface Tab, you can make adjustments regarding Tabs in the Conversation Windows, Displaying of Avatars, and the areas that will display in the Main & Conversation Windows. To make changes in the Main & Conversation Windows, click on any of the "Blue Areas"…this performs the same action as "Select" and "Deselect". This makes it extremely easy to adjust the layout and display for Emesene's Windows!

Ready To Go

Here you can see Emesene open and ready to go. Notice that in the upper right corner there is a small mail counter. This does a wonderful job of displaying the number of new e-mails that you have and quickly adjusts to reflect any changes in that number (i.e. when you have read some or all of them).

There is a very nice Right Click Menu available as well.

The Message Window has a very nice layout with a Formatting Icon Bar available. Notice that you may also add new people to the conversation (Blue Plus Sign) and control file transfers (Green Arrow Symbol) from here as well.

Conclusion

If you are looking for a very nice, uncomplicated, and "lite on system resources" replacement for Windows Live Messenger, then Emesene is definitely worth taking a close look at.

Links

Download Emesene (version 1.0.2) – SourceForge

Emesene Homepage

WolframAlpha

Goals

Wolfram|Alpha's long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. Our goal is to build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries.

Wolfram|Alpha aims to bring expert-level knowledge and capabilities to the broadest possible range of people—spanning all professions and education levels. Our goal is to accept completely free-form input, and to serve as a knowledge engine that generates powerful results and presents them with maximum clarity.

Wolfram|Alpha is an ambitious, long-term intellectual endeavor that we intend will deliver increasing capabilities over the years and decades to come. With a world-class team and participation from top outside experts in countless fields, our goal is to create something that will stand as a major milestone of 21st century intellectual achievement.

Status

The universe of potentially computable knowledge is, however, almost endless, and in creating Wolfram|Alpha as it is today, we needed to start somewhere. Our approach so far has been to emphasize domains where computation has traditionally had a more significant role. As we have developed Wolfram|Alpha, we have in effect been systematically covering the content areas of reference libraries and handbooks. In going forward, we plan broader and deeper coverage, both of traditionally scientific, technical, economic, and otherwise quantitative knowledge, and of more everyday, popular, and cultural knowledge.

Wolfram|Alpha's ability to understand free-form input is based on algorithms that are informed by our analysis of linguistic usage in large volumes of material on the web and elsewhere. As the usage of Wolfram|Alpha grows, we will capture a whole new level of linguistic data, which will allow us to greatly enhance Wolfram|Alpha's linguistic capabilities.

Today's Wolfram|Alpha is just the beginning. We have ambitious plans, for data, for computation, for linguistics, for presentation, and more. As we go forward, we'll be discussing what we're doing on the Wolfram|Alpha Blog, and we encourage suggestions and participation, especially through the Wolfram|Alpha Community.


Future

Wolfram|Alpha, as it exists today, is just the beginning. We have both short- and long-term plans to dramatically expand all aspects of Wolfram|Alpha, broadening and deepening our data, our computation, our linguistics, our presentation, and more.

Wolfram|Alpha is built on solid foundations. And as we go forward, we see more and more that can be made computable using the basic paradigms of Wolfram|Alpha—and a faster and faster path for development as we leverage the broad capabilities already in place.

Wolfram|Alpha was made possible in part by the achievements of Mathematica and A New Kind of Science (NKS). In their different ways, both of these point to far-reaching future opportunities for Wolfram|Alpha—whether a radically new kind of programming or the systematic automation of invention and discovery.

Wolfram|Alpha is being introduced first in the form of the wolframalpha.com website. But Wolfram|Alpha is really a technology and a platform that can be used and presented in many different ways. Among short-term plans are developer APIs, professional and corporate versions, custom versions for internal data, connections with other forms of content, and deployment on emerging mobile and other platforms.

History & Background

Also needed were two developments that have been driven by Stephen Wolfram over the course of nearly 30 years: Mathematica and A New Kind of Science (NKS).



To know more, follow the Wolfram|Alpha Blog:
http://blog.wolframalpha.com/

Friday, August 7, 2009

How to Upgrade the Windows 7 RC to RTM (Final Release)

The final version of Windows 7 was released yesterday for MS Technet subscribers, but you can’t upgrade directly from a pre-release version—at least, not without a quick and easy workaround, and we’ve got you covered.

The Problem


Windows 7 checks whether or not the current version you are running is a pre-release copy, and prevents you from upgrading further. For reference purposes, this is the error you’ll see when you try to upgrade.

The Solution

The solution is to edit a file inside the Windows 7 DVD—which you’ll have to extract to the hard drive to proceed.
  • If you are using an ISO image for the installation process, you can use the awesome 7-Zip utility to extract the ISO to a folder on the drive (a command-line sketch follows this list).
  • If you are using an actual DVD, you can simply copy all of the files from the DVD to a folder on your hard drive.
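For keyboard fans, the 7-Zip command-line extraction mentioned in the first bullet might look like this (the paths and ISO file name are placeholders):

7z x C:\Downloads\Windows7_RTM.iso -oC:\win7rtm   # extract the full ISO contents into C:\win7rtm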

Once you’ve extracted the files, browse down into the “sources” folder to find the cversion.ini file.

Once you’ve opened up the cversion.ini file, you’ll notice that the MinClient line has a value of 7233.0, and since the Windows 7 RC release is build 7100, you can understand why it’s not working.



All you need to do is change the MinClient value to something less than the current build you are using. For the RC release, you can change it to 7000.
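After the edit, cversion.ini should end up looking something like this (the section name and the MinServer line are recalled from the RTM media rather than shown here, so treat this as a sketch):

[HostBuild]
; MinClient lowered from 7233.0 so the RC (build 7100) passes the version check
MinClient=7000.0
MinServer=7100.0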

Now you can simply launch the setup.exe file from within the folder, and do the upgrade directly from the hard drive. Once you’ve started the setup, click Install now.


Once you get to the type of installation screen, choose to Upgrade the existing install.


Once you reach the Compatibility Report screen (if it doesn’t show up at all, be happy about it), you’ll see the list of applications that probably won’t work once you upgrade. Realistically most of these apps will work just fine, but the important thing is that you’ll be able to upgrade.


Note: You could always smooth the upgrade process by removing any apps that have compatibility problems, before you do the upgrade.



At this point, the upgrade should start working, and will take a rather long time.

Important Notes

There are a few important things to keep in mind when you are upgrading to the final version:

  • The Windows 7 beta or RC releases were Ultimate edition, so you’ll only be able to upgrade to the RTM (final) if you are installing Ultimate Edition.
  • Whenever possible, you should really back up your files and do a clean install. There are fewer headaches this way, and you get the benefit of a nice clean profile.

Happy upgrading!


Saturday, August 1, 2009

Steps to host an application in the cloud

To host an application in the cloud with Windows Azure, you upload it using the Azure portal at:
https://lx.azure.microsoft.com/Cloud/Provisioning/Default.aspx

If you want to put a SQL database back end in the cloud, you'll want to check out SQL Data Services. If you want to access your on-premises SQL Server from the cloud, you'd have to punch the appropriate holes in your firewall to allow access (not something I think any DBA or network admin would be fond of doing).
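For example, on an older Windows Server box, the "hole" for SQL Server's standard port could be punched with the legacy netsh syntax below (1433 is SQL Server's well-known default port; the rule name is made up):

netsh firewall add portopening TCP 1433 "SQL Server inbound"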

You might want to check out the Windows Azure whitepaper from MS. It has a lot of introductory information on the services available as part of Azure Services: http://www.microsoft.com/azure/whitepaper.mspx

I've also posted several blog articles that walk through some basic topics: http://bstineman.spaces.live.com/?_c11_BlogPart_BlogPart=blogview&_c=BlogPart&partqs=cat%3dAzure%2520Services

Cloud computing can be broken down into 3 basic categories.
SaaS - software as a service (using a hosted product such as SalesForce.com or CRM Online)
PaaS - creating an application that is then deployed into a hosted environment (Windows Azure)
IaaS - a virtualized infrastructure hosted in the cloud (EC2 and to an extent Windows Azure)
When hosting an application in Azure, it needs to be built for Windows Azure. You can't simply build it as you would a traditional on-premises application and push it up to the cloud. This doesn't prevent you from pushing custom bits into Azure as you would on-premises, but it does limit what can be pushed. In a nutshell, it seems that if you have to run an installer to get the required software onto the computer, you won't be able to run it on Azure. However, if you just need to load some redistributables onto the box and they can reside in the application folder, that shouldn't prevent it from running.

Of course, as you'd expect, there are security limitations in place to help ensure that anything that gets loaded into your deployment doesn't compromise the Azure Fabric and possibly damage other applications running in the Windows Azure cloud.



CherryPal

Does this whole cloud computing craze make your mental gears turn? Chances are, unless you are a big company with some good friends who would love to let you demo this new way of computing, you don't have the money to drop on getting a cloud system up and running. And if you're a home user, cloud computing is overkill for you.

You're not out of luck, however, if you still want to try this. There is a cloud PC that you can use without the need for a server; it's called CherryPal. It is a small plastic box with two USB ports, VGA out, audio out, LAN, and wireless. It has 4 GB of flash memory, so you can store things locally as well as on the server. Along with this PC, you get 50 GB of online storage for your hard drive. It sounds like a pretty good deal at $250. It runs a distribution of Linux with applications such as OpenOffice bundled with it. For all those environmentally conscious users, this computer consumes a total of 2 watts.

Disadvantage of Cloud Computing

Cloud computing disadvantages:

First I'll break the bad news about cloud computing, and then give you all the benefits of it. One of its few major issues is that it relies totally on network connections. If the network goes down, you're done using the computer until it is back up. If the network gets bogged down, then your computing will be slower.

The other major downfall is that it doesn't use a hard drive. While it is a benefit as I will discuss later, it is also a negative. Some applications or hardware might require having a hard drive attached to the computer; these might be hard to get working properly with the hard drive on a remote server.

From my experience, the last big issue is peripherals. Getting printers to work is hit or miss. The more popular printers will give you little trouble when you try to get them working properly. The little printers that aren't as common, such as label printers, can face issues with the mini PC that each user has.

In most big businesses, few people have personal printers; most printers are networked, so it's not a big issue to a majority of users. Things such as scanners use software to work with the PC, however, and if your virtual hard drive doesn't have the software, when you log onto the cloud computer at a desk, you won't be able to use the scanner until you install the software.

User Management – not so easy. If you stick with the Amazon Web Services console, you are down to one person logging into the system. Using shared credentials is a big no-no when it comes to providing security and accountability for your project. I did not find a software package or product that would allow me to make multiple administrator accounts in a way that worked for me. There are software programs that exist, but I found most of them lacking what I was looking for, leaving me with the AWS console and access to the computer being shared by multiple people with one login. If anyone has a great tool that will provide the ability to audit, report, and support multiple accounts across multiple AWS images, then I would love to hear about it.

Updates – if you want to make sure your system is updated, then I would highly suggest you make your own AMI on a fully supported operating system (regardless of flavor). One of my biggest issues was finding out that YUM on Fedora would not update on the Amazon AMI, and kept coming up with out-of-mirror errors. Fedora needs to step up to the plate and provide a fully functioning mirror, or better yet, you should make your own AMI for AWS that provides the support you need for what you are trying to do. There is nothing more dangerous on the internet than an operating system that cannot be patched or software that cannot be updated. If you are using a Windows AMI, make sure that everything is installed or that you have an I386 directory; there is nothing more amusing than being prompted to provide the installation disk when your OS is in the cloud.
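Rolling your own S3-backed AMI with the EC2 AMI tools goes roughly like this (the account ID, bucket, keys, and file names are placeholders, and some optional flags are omitted):

ec2-bundle-vol -k /mnt/pk.pem -c /mnt/cert.pem -u 111122223333 -d /mnt -p my-fedora-ami   # bundle the running instance's root volume
ec2-upload-bundle -b my-ami-bucket -m /mnt/my-fedora-ami.manifest.xml -a AKID-PLACEHOLDER -s SECRET-PLACEHOLDER   # push the bundle to S3
ec2-register my-ami-bucket/my-fedora-ami.manifest.xml   # register the image so new instances can be launched from it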

Socialization of the product – Cloud computing is scary for many people, because it puts everything you have in someone else’s data center. For some, the concept must be killed now; others see it as a way to save money. Regardless of your approach, you have to make sure you have full executive support and buy-in for what you are doing; otherwise your project will languish for weeks if not months waiting for people to grab onto the concept. You should be prepared to deal with fear, FUD, and other undesirable business ways of killing off the cloud computing project.

Failure is going to happen – make sure you snapshot your cloud computing system before you start tinkering with it. While backups are always a good idea, it is also an excellent idea to simply snapshot what you are doing before you make huge changes to the system. This is often easier than trying to back out a pile of changes when you have no idea which change broke what.
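With the EC2 command-line tools, snapshotting before a round of tinkering is quick (the volume ID below is a placeholder):

ec2-describe-volumes              # find the volume attached to your instance
ec2-create-snapshot vol-1a2b3c4d  # snapshot it before you make big changes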

Never ever hit “Terminate Instance” – if you hit terminate instance, it is gone, gone for good, and you cannot get it back; pray you made a snapshot before you “turned the computer off” for the night. Otherwise you get to start from scratch on the whole thing, which will cause amused chuckles from your co-workers, and bemused responses from your Project Manager.

Logging – make sure you have a nice way of getting logs back to the company, or that you are going to be able to process logs on the cloud computers. You will want this for compliance, and to know if someone is doing something unusual with your system. Syslog over the internet is not a great way to get logs off the cloud back to the company. You want to use a VPN to make this happen, and it is easy to set up regardless of chosen operating system.
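On a Linux instance running rsyslog, forwarding everything to a collector on the company side of the VPN is a one-line rule (the address below is a placeholder for your VPN-side log host):

# /etc/rsyslog.conf on the cloud instance
*.* @@10.8.0.1:514   # @@ forwards over TCP; a single @ would use UDP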

Documentation – you want to make your own documentation for the project you are working on; do not rely on the AWS documentation to help you, as it is often difficult to follow, understand, or read. The approach is to make your own, and to do it in such a way that anyone can follow what you did, how you did it, and what they need to duplicate your processes. Like any good system design or development effort, cloud computing systems should be documented just like any other internal project.


Make sure you have a good ROI – make sure that if you go into the cloud, there are obvious, measurable cost savings in the offering. Some setups make no sense economically, while for some companies they do. Make sure you do not overspend on your project.

Do not underestimate the amount of time required – one of the biggest shortfalls of the project was incorrectly allocating time, or grossly underestimating the amount of time it would take to complete the project. In most projects I am usually dead on with time estimates, but I found that I was underestimating time to completion for this first cloud computing project by upwards of 4X. If I thought a task would take 30 minutes, it often took 2 hours to complete. Most of this is attributable to new technology, new ways of doing things, and general training ramp-up time. While not a barrier to use, it was a barrier to completing the project on time.


URLs and IP Addresses – AWS will give you a temporary URL to work with when you are prototyping your system. If you use that, then assign an IP address later and a domain name after that, and you did not make the base URL a configurable item, you will have to go back and change it: the temporary URL no longer works after you assign an IP address. It might be easier to set up an IP address and DNS entry before you let the developers at the system.
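With the EC2 tools, reserving an Elastic IP and binding it up front looks roughly like this (the instance ID and address are placeholders):

ec2-allocate-address                              # reserve an Elastic IP for your account
ec2-associate-address 203.0.113.25 -i i-1a2b3c4d  # bind it to the instance, then point DNS at it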

Open Source Software is the best way to go here – mostly because you do not have access to a DVD drive in the cloud, and software that wants a DVD to verify you own it, or needs a DVD to load, is an oxymoron in the cloud. Otherwise, make your own AMI, and make sure that what you are building is able to work without a DVD drive, disks, or other items we take for granted in the data center, where we can touch physical items.

Those are the big key issues and some interesting differences between being in the data center and working through the cloud. These are my own experiences working with Amazon Web Services over the last 60 days. Overall, though, I would say that the experience with AWS has been positive, but given the ramp-up time, learning curve, and support issues, this project actually lasted 75% longer than it was initially estimated to last. Most of that is, again, attributable to the learning curve; the promise of cloud computing for rapidly deploying a system should hold true for any other deployments that we do in the cloud. But for the first big project, make sure you either already have folks trained in cloud computing, or that your project can go over time without detriment.

Make sure you are ready to be audited as well; that includes log management, user accounts, and patching/updating. While I highly recommend you make your own tested AMI, some people will not do that and will use whatever AMI already exists for the project they are doing. You will want to make a careful risk evaluation of that stance, depending on what you are using your cloud computing system for. If you fall under SOX, HIPAA, or anything else, you want to make sure you stay in compliance.