Wednesday 30 January 2013

Why the Internet works and how to break it





If the internet were a person, it would be beginning to feel its age this year as it gets into its 30s, with a mid-life crisis looming. As it happens, the internet has never looked better: it's faster, bigger and richer than it was in its 20s.
But there are people having a mid-life crisis on the internet's behalf. Governments want to change how it is governed, how it works and, most disturbingly, how open it is. So it is worth taking a moment to outline the first principles that have made the internet successful, why they are worth preserving and what we can expect if they are preserved.

Bob Kahn and I began work on the design that became the internet in the 1970s, motivated by the spectacular success of the Arpanet project, funded by the US Defense Department, in which small computers sent data 'packets' across dedicated telephone circuits. It was a homogeneous network connecting heterogeneous computers: different operating systems, different word sizes, different computational capacities.

We met in 1973 at Stanford and started working on a design to allow up to 256 networks to be connected in such a way that the host computers would not need to know anything about the layout of this super-network. At the same time, every host computer would be able to talk to every other one despite their different operating systems and other differences. We also worked on a detailed design of the Transmission Control Protocol (TCP) and began implementing and testing it in 1975.

We were sure that this was powerful. The packets we were using to transport data were remarkably adaptable: they could be transmitted over any digital communication channel, bringing with them any information that could be digitised. The network was not designed for a particular application, and this has allowed it to support applications that weren't predicted in the early formulation of the internet's design.

We didn't, for instance, anticipate the hand-held mobile. We did anticipate an 'Internet of Things' — more on that in a moment — and personal computing. We even foresaw notebook computing, whereby a computer that isn't powerful can perform tricky tasks by drawing on the internet. We didn't have to imagine it. Alan Kay had shown a notebook-computer concept he called FLEX around 1968, and Xerox PARC built the Alto personal computer along with Ethernet in the early 1970s. They were living in a world that others would not experience for 20 years.

The system Bob and I designed, alongside collaborators from Europe and Asia who visited our lab in the mid-1970s, has since grown by factors of a million or more on all dimensions: a million times more users, a million times more hosts, a million times more networks, all connected a million times faster.

But the numbers aren't the only difference. The internet era is different from the telephone era for at least two reasons: it allows groups to communicate, coordinate, collaborate and share information, and it supports every medium of communication invented, all in one network. People can discover each other without knowing who they are and they can find groups with common interests.

The institutions spawned by the internet, which build and regulate it, are similarly meritocratic and diverse. The Internet Society, Internet Architecture Board, Internet Engineering and Research Working Groups, Internet Governance Forum and Internet Corporation for Assigned Names and Numbers: all of them are run by many stakeholders who together decide policies and standardisation. It is a meritocracy that respects ideas more than institutions. It values openness and sharing of information, freedom of choice and expression.

Of course, the internet can be abused and people can be harmed by that abuse. Protection of personal information should be a high priority for all internet application providers. We also need to educate people about what can happen when they share information on the internet: once it is available to anyone, someone can upload it to other sites or capture and store it. Any country that adopts the internet soon finds that some harm comes from people in other national jurisdictions. We will need to find ways for international cooperation to deal with abuse.

But as we figure out better ways to make cyberspace safe to use, we must preserve the very properties that have made it so successful: freedom of expression, transparency and openness, and participatory policy and technology development.

Tuesday 29 January 2013

Tip to become a successful software engineer.




This post is a follow-up to Derick’s great post. I could not agree more with his viewpoint, and it struck a chord with me. But there is more to it: to actually call yourself a software engineer, you need to take into account a few aspects of what an engineer should do.

You’re Not Paid To Type
Typing code into a code editor or text editor is not what a Software Engineer is paid to do. At least, it is not the primary reason this profession exists. Yes, part of the job is to write code in any number of languages and platforms. As Derick pointed out, it is more than writing code; it is about writing tests and making sure the code you do type works as designed and can be easily maintained.
All that being said, the actual act of typing is simple and quick. There is training in keyboard typing and methods to increase how many words per minute one can type. So, does typing more code constructs per minute mean you should get paid more money? If you turn out more code than the engineer sitting next to you, have you created more value? See where I am going with this? Typing is easy, and typing the wrong code is really easy. I have seen organizations that are so fearful of missing deadlines and dates that developers think they need to start writing code NOW, even though they don’t really know what they are supposed to be creating. They know what to create in a general sense, but they rush into writing software without knowing most of the details.

You are paid to THINK, so start doing that
So, the main point of this post is that Software Engineers are paid to think. You are paid to think about what the correct code to create is, and how it should be constructed to lower the total cost of ownership.
If you only change one thing about the way you work this year, try this: if you normally get your requirements verbally, try writing them down.
Write down your requirements or technical plan in the easiest manner possible. That could be on a whiteboard; you could annotate a screenshot of an existing screen; you could use a pencil and draw the changes on a printout of a screenshot. Just do something in terms of thinking about what needs to be done before you start typing. If you do write down what you plan to do, you can actually communicate it to other developers. You can have someone else review it and think through the problem. You can also show it to the person who will decide whether you created the correct software; imagine getting some feedback on what you want to build before you mess it up.
The two most valuable ways I have found to write down what needs to be created are screen mockups and sequence diagrams. Now, I have been in the web space for a long time, so if you are not creating websites or web applications, you may find that there are better ways to write down what you need for your particular design problem. Either way, try to write it down. If you are writing mockups today, then add a sequence diagram for the more complicated problems and see if it helps. I know it helps me and the developers I work with.



Posted by Eric Hexter 

Monday 21 January 2013

Facebook's annual hacker competition opens for registration



Social networking giant Facebook has opened registration for its third annual Hacker Cup, set to begin on January 25.

The top prize will be $10,000 (approx. Rs 5.5 lakh), as against $5,000 (Rs 2.75 lakh) last year, and as many as 25 of the best hackers will be taken to Facebook's headquarters in the US, a statement said.

The Facebook Hacker Cup is an annual worldwide programming competition where hackers compete against each other for fame, fortune, glory and a shot at the coveted Hacker Cup.

"The competition will be held in two rounds starting with an online qualification, out of which 25 of the best hackers are then flown by Facebook to their headquarters inMenlo Park, CA," Facebook said.

"The preliminary round will be held between January 25 and February 16 while the onsite final round is scheduled for March 22-23, 2013.

Contestants will be judged on accuracy and speed as they race to solve algorithmic problems to advance through up to five rounds of programming challenges," it added.

Last year, the Hacker Cup attracted 8,000 participants from 150 countries, with the winner, Roman Andreev, hailing from Russia, Facebook said.

"Hacking is core to how we build at Facebook. Whether we're building a prototype for a major product like Timeline at a Hackathon, creating a smarter search algorithm, or tearing down walls at our new headquarters, we're always hacking to find better ways to solve problems," Facebook said in its Hacker's Cup page.

Facebook said that those who registered in a previous year are automatically registered for this year's competition; however, they still need to check that their information is up to date.

Friday 18 January 2013

Nokia to transfer 820 jobs to TCS, HCL Tech



Finnish mobile phone maker Nokia said it will cut over 1,000 IT jobs, including 820 employees who will be transferred to HCL Technologies and Tata Consultancy Services, as part of an already-announced restructuring.
It said 300 jobs will be cut altogether, and that most of the reductions would be in Finland.
HCL Tech has recently entered into a long-term, global IT infrastructure management outsourcing services agreement with Nokia. The scope of this engagement includes datacenter, network management, end-user computing services and cross-functional service management across Nokia's global IT infrastructure operations. As part of this engagement, HCL will be deploying its MTaaS and MyCloud solutions. HCL has also been delivering global service desk and desktop management outsourcing services for Nokia since 2009.
TCS has been operating in Finland for about 10 years, servicing clients such as Nokia Siemens, ABB and Telenor.

The job cuts are part of Nokia's plans to cut 10,000 jobs, including 3,700 in Finland.

Nokia will offer employees affected by these planned reductions both financial support and a comprehensive Bridge support programme. These are the last anticipated reductions as part of Nokia's focused strategy announcement of June 2012.

The majority of the employees affected by these planned changes are based in Finland. Nokia is beginning the process of engaging with employee representatives on these plans in accordance with country-specific legal requirements.
Source: http://www.techgig.com/tech-news/editors-pick/Nokia-to-transfer-820-jobs-to-TCS-HCL-Tech-16743

Wednesday 16 January 2013

Open-access activist and internet hero Aaron Swartz dies




Internet activist and programming star Aaron Swartz has died by suicide in New York, his family has confirmed, while facing a potential $1m in fines and up to 35 years in prison over federal charges around computer hacking. Swartz died on Friday at the age of 26, his uncle and his legal team independently confirmed to MIT’s The Tech.
The programmer was integral to the creation of RSS, and created a company that later merged with popular internet destination Reddit. More recently, however, he was investigated for hacking JSTOR, the subscription-based journal service, and extracting its database with the intention of releasing it publicly.
Swartz was a vocal open-access campaigner, and had form in turning to hacking when demands for public data went unheard. In 2008, he wrote software to extract and collate information from the Pacer directory of federal judicial documents, the NYT reports, in protest of the service’s $0.10-per-page fee for retrieval. Swartz’s app snagged around 20m pages using free library accounts.
The government opted not to press charges, but Swartz was less lucky after breaking into JSTOR. In that case, he physically breached security, installed a laptop running custom software, and pulled 4.8m documents from the database. Although JSTOR itself did not pursue the hacktivist, US attorney Carmen M. Ortiz was not so accommodating, and Swartz was indicted in July 2011.
For more on Swartz – the impact of his work on free data, and the world he leaves behind – we’d recommend Lawrence Lessig’s piece “Prosecutor as Bully.” BoingBoing’s Cory Doctorow also has a must-read tribute to Swartz, including information on DemandProgress, the organization Swartz helped establish. Finally, Swartz’s partner, Quinn Norton, has a piece that’s well worth reading.

Open-access activist and internet hero Aaron Swartz dies is written by Chris Davies & originally posted on SlashGear. 

Source: http://www.techgig.com/tech-news/editors-pick/Open-access-activist-and-internet-hero-Aaron-Swartz-dies-16700

Monday 7 January 2013

New Facebook app to allow free voice calls to friends



Facebook is preparing to launch a new feature for its Messenger app which allows users of the social networking site to place free voice calls to friends.

The feature is so far available only to smartphone users in Canada and is buried within the latest update to the app, but it will eventually allow users to make free internet voice calls, known as VoIP calls, to any Facebook friend.

Experts are saying it represents an attempt by the world's largest social network to dominate the social world by taking on the default calling function in mobile phones, the 'Daily Mail' reported.

The new feature comes at the same time as Facebook Messenger rolled out a new feature worldwide which allows users to record and send a voicemail-type message to friends.

Working in a similar way to video messaging in the company's Poke app, users press and hold a red record button, speak their message, and it appears in line as part of the conversation.

TechCrunch writer Josh Constine imagines a range of uses for the function, from messaging while driving to recording the waves lapping at a beach to send to friends.

However, its addition to the Messenger app seems merely to make it an 'even more complete app', he writes, adding that he expects video messaging to soon be added as well.

Canada is one-tenth the size of the US but has very similar demographics and mobile usage trends, and Facebook is using it as a testing ground in advance of rolling out the feature in other markets, the paper said.

By clicking the 'i' icon in the top right of a conversation in the most recent update to Messenger, users reveal a 'free call' button which allows them to contact any friend also within the test region.

However, while Facebook is not charging users for the service, the call is not technically free since it will use data on users' existing mobile plans.

TechCrunch said that the move into voice messaging and VoIP can be seen as an attempt by the social network to take on the default, mobile network operated calling function on smartphones.

Internet emits 830 million tonnes of carbon dioxide


The internet and other components of the information and communications technology (ICT) industry annually produce more than 830 million tonnes of carbon dioxide (CO2), the main greenhouse gas, and that figure is expected to double by 2020, a new study has found.

Researchers from the Centre for Energy-Efficient Telecommunications (CEET) and Bell Labs explain that the information communications and technology (ICT) industry, which delivers Internet, video, voice and other cloud services, produces about 2 per cent of global CO2 emissions -- the same proportion as the aviation industry produces.

In the report, published in the journal Environmental Science & Technology, the researchers said their projections suggest that the ICT sector's share of greenhouse gas emissions is expected to double by 2020.

They have also developed new models of emissions and energy consumption that could help the industry reduce its carbon footprint.

The study said that controlling those emissions requires more accurate but still feasible models, which take into account the data traffic, energy use and CO2 production in networks and other elements of the ICT industry.

Existing assessment models are inaccurate, so they set out to develop new approaches that better account for variations in equipment and other factors in the ICT industry.

They describe the development and testing of two new models that better estimate the energy consumption and CO2 emissions of internet and telecommunications services.

The researchers suggest, based on their models, that more efficient power usage of facilities, more efficient use of energy-efficient equipment and renewable energy sources are three keys to reducing ICT emissions of CO2.

Friday 4 January 2013

10 enemies of being a good programmer




This article discusses the habits that need to be avoided if a person wants to become a good programmer.

Introduction and background

Information technology is no longer a new term. For the common person, it is the field that guarantees good money and a good life (in the context of India). People working in this industry are looked up to and generally considered more intelligent than others.
After working in this industry for quite some time, it looks to me like the above points are coming very close to being a myth.

This industry has definitely been a life changer for many, as well as a big employer and earner of foreign currency, but by and large it would be interesting to know what the reality is at ground level.

Many would comment that the point about intelligence is overrated, and that this is just another industry whose dollar revenues help produce mammoth turnovers.

The common minimum ingredient of this industry is the programmer (or developer, or coder), the one writing the programs that are supposed to make things happen. As in every field of life, there are good programmers and bad programmers. The other hot topic in this industry is whether it possesses a sizeable chunk of good programmers or not. Many wouldn’t agree.

So, what makes a good programmer? This is a debatable point, and to make things simpler, let’s look at it the other way round, i.e. discuss the top 10 enemies that can prevent someone from becoming a good programmer.

Objective

To know the DON'Ts for anyone who wants to become a good programmer.

Description
The programming world is a confusing one. There are so many languages, technologies, platforms and infrastructures to choose from that they do not make a programmer's life easy, even though they are supposed to. In a scenario where business needs outrun engineering practices, programming has become genuinely complex, increasingly challenged by time and budget. There has been a lot of research into software metrics to measure performance and quality, but it is still not straightforward to categorize programmers and say who is good or bad. Following is a list of points which, if we keep them in check while programming, will definitely result in better code quality, better planning and a better professional life. These points are drawn from real incidents that keep happening with considerable frequency, and such incidents lead to poor quality, poor performance and higher costs. They are not just about technical skill, but about attitude, awareness and behaviour.

1. Is it the computer or my program?: When something goes wrong, there is a tendency to claim that something must be wrong with the computer, or with anything other than me. It’s funny, but it is an omnipresent statement one hears. Barring a few cases, it’s almost always the program doing something wrong rather than the computer. If we take environmental and infrastructural attributes into account while programming, we won't end up saying this. This is the biggest enemy.

2. It’s working on my PC: This is probably the most used, age-old statement, and many have encountered it umpteen times. It is hard to believe, but it happens all the time that a program works well on the developer’s machine and fails after deployment. Do we program for our own PC, or are our programs intended to run only on the developer’s machine? Surely not, and this is caused by insufficient programming skill and too little knowledge of the working environments, the necessary settings and so on.

In most such cases the programmer has simply forgotten to update the deployed environment with some setting or configuration, or a component is missing, rather than anything else.

3. Finger pointing: "I have just changed it, but I didn’t cause this error; it must be somebody else who worked on the same stuff." This is a statement one hears all the time, and it is generally the first reaction when someone is asked about an error or fault. In reality, these statements are made when something has been changed and the original functionality has been lost. It is intriguing: something was changed, things were working before the change, and still programmers say this. It shows a lack of ownership and understanding, and an escapist attitude. Given the complexity and the difficulty of finding the facts, many are encouraged to say this.

There is a small remedy for this: the practice of unit testing. Take my word for it, life becomes easier.
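For instance, a minimal unit test might look like the sketch below (written here with the NUnit framework; the PriceCalculator class and its discount rule are purely illustrative). A test like this pins down existing behaviour, so the next time "something is changed", the test run immediately shows whether the change broke it:

    using NUnit.Framework;

    // Illustrative class under test; the discount rule is made up for this example.
    public class PriceCalculator
    {
        // Orders with a gross value above 1000 get a 10 per cent discount.
        public decimal Total(int quantity, decimal unitPrice)
        {
            decimal gross = quantity * unitPrice;
            return gross > 1000m ? gross * 0.9m : gross;
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void Total_AppliesDiscount_ForOrdersAbove1000()
        {
            var calculator = new PriceCalculator();

            decimal total = calculator.Total(quantity: 10, unitPrice: 150m);

            Assert.AreEqual(1350m, total); // 1500 gross minus 10 per cent
        }

        [Test]
        public void Total_ChargesFullPrice_ForSmallOrders()
        {
            var calculator = new PriceCalculator();

            Assert.AreEqual(300m, calculator.Total(quantity: 2, unitPrice: 150m));
        }
    }

With tests like these in place, whoever changed the code last sees the failure on their own machine instead of arguing about who caused it.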

4. Skipping the validation of user input: When programming interactive applications, one tends to skip validating user input under the assumption that it has been coded for and will just work. The minimum check is to confirm that the data entered in the GUI does not exceed the data types and lengths defined in the database. No wonder many of the defects, and many of the program crashes, arise out of the lack of such validation.
It is better to always ensure that the data types and lengths in the GUI match the ones defined in the data source, and that the common pitfalls that open the door to attackers are handled.
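As an illustration, here is a minimal length check in C# (the field name and the 25-character limit are assumptions, picked to mirror a varchar(25) column; adjust both to your own schema):

    // A minimal input-validation sketch; the field name and the 25-character
    // limit are illustrative and should mirror your own database schema.
    public static class CustomerInputValidator
    {
        private const int LastNameMaxLength = 25; // e.g. a varchar(25) column

        public static bool TryValidateLastName(string input, out string error)
        {
            if (string.IsNullOrWhiteSpace(input))
            {
                error = "Last name is required.";
                return false;
            }

            if (input.Trim().Length > LastNameMaxLength)
            {
                error = "Last name must be " + LastNameMaxLength + " characters or fewer.";
                return false;
            }

            error = null;
            return true;
        }
    }

The same check belongs on the server side even if the GUI also enforces it, since client-side validation can be bypassed.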

5. Not estimating (or planning) the work or tasks: Most programmers (especially junior ones) have the urge to jump into coding without estimating the work. Estimation is not just important from a planning perspective; it gives an opportunity to revisit the scope, since estimates are bound to scope.

There might be a couple of tasks, challenged by time and/or budget, which can be performed well without estimates, but not all. Whatever the work or task, if it is estimated, then planning, controlling and monitoring become easier, and there is an avenue for asking for help before it is too late.

6. Swallowing the exceptions: Exception handling is still a mystery to many programmers, and when they don’t know how to handle an exception, or there is no good exception-handling mechanism, exceptions are swallowed, meaning no action is taken after such exceptions or errors occur.
Today's programming languages offer a far more sophisticated mechanism in the form of try-catch-finally. Swallowing exceptions mostly results in application crashes, putting programmers in an awkward position.
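To make the difference concrete, here is a small sketch (the file-reading scenario and the console log are placeholders): the first method swallows the error, the second handles it and lets it surface:

    using System;
    using System.IO;

    public static class ConfigReader
    {
        // Anti-pattern: the exception is swallowed, so the caller gets null
        // and never learns that the file could not be read.
        public static string ReadSettingSilently(string path)
        {
            try
            {
                return File.ReadAllText(path);
            }
            catch
            {
                return null; // swallowed: no log, no rethrow
            }
        }

        // Better: act on the error (log, clean up) and let it propagate
        // so the failure stays visible.
        public static string ReadSetting(string path)
        {
            try
            {
                return File.ReadAllText(path);
            }
            catch (IOException ex)
            {
                Console.Error.WriteLine("Could not read " + path + ": " + ex.Message);
                throw; // preserves the original stack trace
            }
            finally
            {
                // release any resources acquired above, even when an exception occurs
            }
        }
    }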
7. Blindly copying and pasting: In today’s internet era everything is available online, and code snippets are no exception. With super-efficient search, it is possible to locate a code snippet that can do one's job.

It probably can't be called wrong to copy and use such snippets, but unfortunately the usage usually coincides with blind copying or retrofitting, which results in partial solutions that do not take the existing scenario into account. One also has to remember that such snippets need more rigorous testing than your own code to make sure they do what is needed. Copying from code samples or others' code also tends to skip the understanding curve, resulting in inferior-quality software that is difficult to change and maintain. So copying and pasting may be unavoidable, but it is better to invest some time in understanding what the snippet does and what is expected of it, and to proceed with sound judgment.
8. Not keeping up to date: The technology landscape is characterized by continuous change and improvement. Working hard and long, many programmers are not able to keep themselves in sync with such changes and improvements. This may result in inferior solutions or far more time investment than can be justified. There are also many changes happening on the software-engineering front, where new methodologies like Agile and XP address many pertinent challenges in the development life cycle.

9. Lack of documentation, comments and standardization: There is no denying that programs are written to be executed by computers, but they are also written to be understood by others. Many programmers shy away from writing enough comments or documentation to tell others why, how and when something is done. This leads other programmers to not even attempt to understand the code, and to write something that adds to the confusion. There are many instances of coding horrors, mess and chaos which have one thing in common: lack of documentation and standardization. Essentially, programs should be human-readable.

10. Valuing speed over accuracy: This is a trap even for proven programmers. Many think that programming is a 100-metre race and one has to be lightning fast to win, and in the process quality and accuracy are sacrificed. This snowballs into more work (some may even want this for the continuity of their business), which is not good if one looks at cost and time. Accuracy should take priority; speed with accuracy should be the motto.

Summary and Conclusion
This topic is too broad and complex to be discussed and addressed in one write-up. Still, these 10 points are put forth to touch on the perennial challenges in the software industry. There may be more apt points and more detailed discussions, but the hope is that this write-up helps bring some really useful practices to the fore.

Source: http://www.dotnetfunda.com/articles/article2098-10-enemies-of-being-a-good-programmer.aspx

How technology powers supply chain





Supply chain management is about managing a network of several interconnected businesses involved in the process of delivering the product to the end customer. Throughout this process, the flow of information from one stage to another plays a crucial role, as it ensures effective decision-making at the planning and execution stages.

When it comes to the flow of information, no other tool or resource can handle it as quickly and accurately as technology. This is true of the supply chain as well.

Today, technology in the supply chain is much more than just computers. It covers varied aspects, right from factory automation, enhanced communication devices and data-recognition equipment to other types of automated hardware and services.

Companies have also started using technologies like advanced versions of speech recognition, digital imaging, radio frequency identification (RFID), real-time location systems (RTLS), bar coding, GPS communication, Enterprise Resource Planning (ERP), Electronic Data Interchange (EDI), etc to improve their processes.

These IT-enabled infrastructure capabilities not only help organisations achieve higher efficiency, but also reduce cycle time, ensure delivery of goods and services in a timely manner and improve overall supply chain agility.

Use of technology in three arenas
Organisations use technology in three broad areas, namely transaction processing, supply chain planning and collaboration, and order tracking and delivery coordination. In transaction processing, companies employ technology to increase the efficiency of information exchanged regularly between various supply chain partners. Typically, in this area, technology enables easy order processing, sending dispatch advice, tracking delivery status, billing, generating order quotes, etc.

Technology also helps in supply chain planning and collaboration, thereby improving the overall effectiveness of the process. Here, technology is used to share planning-related information like customer feedback, demand forecasting, inventory level, production capacity and other data. This helps in managing waste and inconsistency arising out of unpredictable and logistically demanding markets. According to Mr Prashant Potnis, GM - IT and Systems at Spykar Lifestyle Pvt Ltd, "Statistical capabilities enabled by technology like importing historical sales data, creating statistical forecasts, importing customer forecasts, collaborating with customers, managing and building forecasts, etc. bring accuracy to a company's demand plans."

Lastly, technology is also useful for order tracking and delivery coordination, as it monitors and coordinates individual shipments, ensuring delivery of the product to the consumer without errors.

Employing technology at all these levels no doubt comes at a cost. However, apart from reducing physical work, technology improves the quality of information, expedites information transfer, and increases and smoothly manages the volume of transactions. Yogesh Shroff, Finance & Supply Chain Director at Nivea India Pvt Ltd, agrees. "A combination of process changes and use of advanced technology can help companies gain better returns on marketing and sales investments, reduce cost, strengthen relationships across the value chain and retain customers," he says.

While most companies have employed technology to manage their supply chain today, they have realised that conventional methods have to be pushed beyond their boundaries to survive in highly competitive environments and fields. If applied correctly, technology holds the potential to turn the supply chain into a major differentiating factor for any company.

Thursday 3 January 2013

Calling javascript function from CodeBehind with C#


JavaScript runs on the client side, while server code requires server-side processing.
It is not possible to directly invoke a client event through a server event. However, there are fairly simple methods to achieve this. I will explain one simple way to do it.

Injecting JavaScript code into a label control

This is one of the simplest methods to call a JavaScript function from code behind. You need to do the following:

1.     Create a new website project
2.     Add a label and button control on your Default.aspx page
3.     The body part of your markup should now look something like this: 

<body> 
    <form id="form1" runat="server"> 
    <div> 
        <asp:Label ID="lblJavaScript" runat="server" Text=""></asp:Label> 
        <asp:Button ID="btnShowDialogue" runat="server" Text="Show Dialogue" /> 
    </div> 
    </form> 
</body> 

4.     Add a javascript function to show a message box as shown below: 
<head runat="server"> 
    <title>Calling javascript function from code behind example</title> 
        <script type="text/javascript"> 
            function showDialogue() { 
                alert("this dialogue has been invoked through codebehind."); 
            } 
        </script> 
</head>
5.     Now double-click on the button and add the following code: 
lblJavaScript.Text = "<script type='text/javascript'>showDialogue();</script>";

6.     Run your project and click the Show Dialogue button

A message box will be displayed as shown below: 
(Screenshot: client-side message box)
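As an aside, a commonly used alternative to injecting the script through a Label is the page's ClientScript.RegisterStartupScript method, which emits the script near the end of the form for you. A minimal sketch of the button's click handler using it (the handler name assumes the btnShowDialogue button defined above has been wired up by double-clicking it):

    protected void btnShowDialogue_Click(object sender, EventArgs e)
    {
        // Registers a one-off script block that runs once the page has loaded,
        // calling the showDialogue() function declared in the page head.
        ClientScript.RegisterStartupScript(
            this.GetType(),
            "showDialogueCall",   // unique key for this script block
            "showDialogue();",    // script to run
            true);                // let ASP.NET add the <script> tags
    }

Either approach produces the same alert; the Label technique shown in the steps above simply makes the injected script visible in the page markup.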

3-Tier Architecture in ASP.NET with C# tutorial


   
In 3-tier programming architecture, application development is broken into three main parts: the Presentation Layer (PL), the Business Access Layer (BAL) and the Data Access Layer (DAL). This separation enables independent development of the layers, such that a change in one does not affect the other layers. For instance, a change in the logic of the DAL does not affect the BAL or the presentation layer. It also makes testing easier, in that a whole layer can be replaced with a stub object.

For example, instead of testing the whole application with an actual connection to the database, the DAL can be replaced with a stub DAL for testing purposes (a sketch of that idea follows below). The DAL deals with actual database operations; these operations are inserting, updating, deleting and viewing. The BAL deals with the business logic for a particular operation. The PL has the user controls for interacting with the application.
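As a sketch of that idea (illustrative only; the tutorial steps below use concrete classes, so the ICustomerDAL interface and the stub here are not part of those steps), the BAL can be written against an interface so that a stub stands in for the real DAL during tests:

    // Illustrative only: an interface the BAL could depend on instead of
    // the concrete CustomerDAL class used later in this tutorial.
    public interface ICustomerDAL
    {
        int Insert(string customerID, string firstName, string lastName);
    }

    // Stands in for the real DAL: pretends the insert succeeded
    // without opening a database connection.
    public class StubCustomerDAL : ICustomerDAL
    {
        public int InsertCallCount { get; private set; }

        public int Insert(string customerID, string firstName, string lastName)
        {
            InsertCallCount++;
            return 1; // mimic "one row affected"
        }
    }

A BAL that accepts an ICustomerDAL through its constructor can then be exercised in tests with StubCustomerDAL, while the real CustomerDAL is passed in at run time.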

3-Tier programming architecture also enables re-use of code and provides a better way for multiple developers to work on the same project simultaneously.
  1. Start a new website project
  2. Design your page as shown below:
    (Screenshot: 3-tier form design)
  3. Add sub-folders and class objects within the App_Code folder as shown below:
    (Screenshot: 3-tier classes in the App_Code folder)
  4. Code for the add button is shown below:
    protected void cmdAdd_Click(object sender, EventArgs e)
    {
        CustomerBAL cBal = new CustomerBAL();
        try
        {
            if (cBal.Insert(txtCustomerID.Text, txtFirstName.Text, txtLastName.Text) > 0)
            {
                lblMessageLine.Text = "Record inserted successfully.";
            }
            else
            {
                lblMessageLine.Text = "Record not inserted.";
            }
        }
        catch (Exception ex)
        {
            lblMessageLine.Text = ex.Message;
        }
        finally
        {
            cBal = null;
        }
    }
  5. Code for CustomerBAL.cs
    using System;
    using System.Collections.Generic;
    using System.Web;
    /// <summary>
    /// Summary description for CustomerBAL
    /// </summary>

    public class CustomerBAL
    {
        public CustomerBAL()
        {
            //
            // TODO: Add constructor logic here
            //
        }
        public int Insert(string CustomerID, string FirstName, string LastName)
        {
            CustomerDAL cDal=new CustomerDAL();
            try
            {
                return cDal.Insert(CustomerID, FirstName, LastName);
            }
            catch
            {
                throw;
            }
            finally
            {
                cDal = null;
            }
        }
    }
  6. Code for CustomerDAL.cs
    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Data.SqlClient;
    using System.Data;

    /// <summary>
    /// Summary description for CustomerDAL
    /// </summary>

    public class CustomerDAL
    {
        public CustomerDAL()
        {
            //
            // TODO: Add constructor logic here
            //
        }
        public int Insert(string CustomerID, string FirstName, string LastName)
        {
            //declare SqlConnection and initialize it to the settings in the section of the web.config
            SqlConnection Conn = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["dbConnectionString"].ConnectionString);
            //===============================
            //prepare the sql string
            string strSql = "insert into t_Customers(CustomerID,FirstName,LastName) ";
            strSql = strSql + "values(@CustomerID,@FirstName,@LastName)";

            //declare sql command and initalize it
            SqlCommand Command = new SqlCommand(strSql, Conn);

            //set the command type
            Command.CommandType = CommandType.Text;

            try
            {
                //define the command parameters
                Command.Parameters.Add(new SqlParameter("@CustomerID", SqlDbType.VarChar));
                Command.Parameters["@CustomerID"].Direction = ParameterDirection.Input;
                Command.Parameters["@CustomerID"].Size = 20;
                Command.Parameters["@CustomerID"].Value = CustomerID;

                Command.Parameters.Add(new SqlParameter("@FirstName", SqlDbType.VarChar));
                Command.Parameters["@FirstName"].Direction = ParameterDirection.Input;
                Command.Parameters["@FirstName"].Size = 25;
                Command.Parameters["@FirstName"].Value = FirstName;

                Command.Parameters.Add(new SqlParameter("@LastName", SqlDbType.VarChar));
                Command.Parameters["@LastName"].Direction = ParameterDirection.Input;
                Command.Parameters["@LastName"].Size = 25;
                Command.Parameters["@LastName"].Value = LastName;

                //open the database connection
                Conn.Open();
                //execute the command
                return Command.ExecuteNonQuery();
            }
            catch
            {
                throw;
            }
            finally
            {
                Command.Dispose();
                Conn.Dispose();
            }
        }
    }
  7. Set the connection string in the web.config file (a minimal sketch follows after these steps)
  8. Run the project
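For step 7, a minimal connection-string entry might look like the sketch below; the name dbConnectionString matches what the DAL code above reads, while the server name, database name and security settings are placeholders to replace with your own:

    <configuration>
      <connectionStrings>
        <add name="dbConnectionString"
             connectionString="Data Source=YOUR_SERVER;Initial Catalog=YOUR_DATABASE;Integrated Security=True"
             providerName="System.Data.SqlClient" />
      </connectionStrings>
    </configuration>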

Apple iPad mini's 'inferior' screen made by Samsung




Apple's iPad mini appears to include an LCD display driver from South Korea's Samsung Electronics, a key supplier but also the Silicon Valley tech giant's fiercest rival in a global mobile-device war.

The iPad mini, to be available in stores on Friday, includes Apple's A5 processor, SK Hynix flash memory and a number of chips from Fairchild Semiconductor International, according to electronics repair company iFixit, which acquired one early and opened it on Thursday.

Apple and Samsung are engaged in patent disputes across 10 countries as they vie for market share in the booming mobile industry, and Apple is believed to be seeking ways to rely less on Samsung. But Samsung remains a key supplier for Apple, manufacturing its application processors and providing other components.

The 7.9-inch iPad mini marks the iPhone-maker's first foray into the smaller-tablet segment. Apple hopes to beat back incursions onto its home turf of consumer electronics hardware, while safeguarding its lead in a larger tablet space - one that even deep-pocketed rivals like Samsung have found tough to penetrate.

How cloud computing can hurt jobs



Cloud computing, regarded as a boon for enterprises, could rain on the party of distributors and resellers who are now an integral part of the technology sales ecosystem. In developed economies, the community of resellers is already shrinking as software and hardware companies engage directly with enterprises. In India, where cloud adoption is just about gaining pace, the drizzle has begun.

"Given the nature of the cloud business, we see that it could be a threat. But the adoption rate is very slow and, therefore, we have a few years before we see signs of trouble," said Ratnesh Rathi, secretary at Computer & Media Dealers Association, whose 350-odd members do business in and around Pune.

Across India, some Rs 61,500 crore worth of hardware and Rs 18,000 crore worth of software products are sold every year, according to industry body Nasscom. And about 23,000 distributors, resellers and retailers - often called channel partners - are responsible for about four-fifths of total sales.

As enterprises move to the cloud for their computing requirements, both hardware and software are delivered over the internet rather than as physical assets that reside on customers' premises. This calls into question the role of channel partners.

Change is on the way, and it could be taking place sooner than many anticipate. A recent study by the Society for Information Management captured early signs of this change. Enterprises used to spend about 32% of their IT budget on in-house hardware in 2011, but that is down to 24% in 2012.

Gartner estimates that $326 million (Rs 1,760 crore) worth of cloud computing services were consumed in India, including software and hardware. Less than one in ten enterprises have adopted cloud computing at present, but the market is expected to grow at an average of 50% every year till 2015.

"Channel partners are facing the ripple effect of the changing times. As clients move to an operational expenditure model, vendors are asking suppliers to learn and change," said KR Chaube, director at Trade Association of Information Technology, or Tait, a group that represents the interests of distributors and resellers in and around Mumbai. "There is a lack of clarity about the cloud among channel partners."

Some channel partners are beginning to adapt to the changing environment. Among them are Bangalore-based Value Point Systems, a Hewlett-Packard partner for nearly two decades, and Kolkata-based Supertron Electronics that distributes Acer and Dell products. By investing in data centres and offering storage as a service, they are looking to evolve from mere box-pushers to value-added technology solution providers.

Ingram Micro, one of the biggest distributors with revenues of more than Rs 10,000 crore, announced partnerships with Microsoft, Salesforce, Netmagic, Zoho and Ramco to offer cloud-based solutions in both software and hardware.

But those such as Ingram and Value Point are exceptions to the rule. Few channel partners are aware of what they need to do to survive and fewer still are capable and equipped to do it. Vendors such as Microsoft and Dell try to help channel partners but ignorance and inertia stand in the way.

While channel partners in developed markets like Australia and Japan already use cloud-based solutions, Indian partners are less willing to change because their business is yet to be impacted in a significant way, according to a senior executive at Oracle.

"They are still getting business within their specialisation as a hardware-only partner or a network-only partner and this is making them unwilling to change," said Stuart Long, chief technology officer and senior sales consulting director for the Systems Division in Oracle's Asia Pacific and Japan region.

Senior analyst at Forrester Research Tirthankar Sen said that unless partners start building on their "cloud quotient", they rule out the possibility of being able to grow with the cloud.

According to market researcher AMI Partners, only about 35% of channel partners are offering cloud-based solutions. The rest are exposed to the risk of becoming irrelevant as the cloud grows in size.

"At the end of the day not everybody may make it." said Microsoft India managing director Sanket Akrekar.

Tracing Microsoft's journey from text to touch


With the recent release of the touch-centric Windows 8 software, Microsoft continues more than three decades of making operating systems for personal computers.

Microsoft got its start on PCs in 1981 through a partnership with IBM. Microsoft made the software that ran IBM's hardware, and later machines made by other manufacturers. That first operating system was called MS-DOS and required people to type instructions to complete tasks such as running programs and deleting files.

It wasn't until 1985 that Microsoft released its first graphical user interface, which allowed people to perform tasks by moving a mouse and clicking on icons on the screen. Microsoft called the operating system Windows.

Windows 1.0 came out in November 1985, nearly two years after Apple began selling its first Macintosh computer, which also used a graphical operating system. Apple sued Microsoft in 1988 for copyright infringement, claiming that Microsoft copied the "look and feel" of its operating system. Apple lost.

Microsoft followed it with Windows 2.0 in December 1987, 3.0 in May 1990 and 3.1 in April 1992.

In July 1993, Microsoft released Windows NT, a more robust operating system built from scratch. It was meant as a complement to Windows 3.1 and allowed higher-end machines to perform more complex tasks, particularly for engineering and scientific programs that dealt with large numbers.

Microsoft had its first big Windows launch with the release of Windows 95 in August 1995. The company placed special sections in newspapers, ran television ads with the Rolling Stones song "Start Me Up" and paid to have the Empire State Building lit up in Windows colors.

Comedian Jay Leno joined co-founder Bill Gates on stage at a launch event at the company's headquarters in Redmond, Wash.

"Windows 95 is so easy, even a talk-show host can figure it out," Gates joked.

The hype worked: Computer users lined up to be the first to buy it. Microsoft sold millions of copies within the first few weeks. Windows 95 brought built-in Internet support and "plug and play" tools to make it easier to install software and attach hardware. Windows 95 was far better - and more successful - than its predecessor and narrowed the ease-of-use gap between Windows and Mac computers.

At around the same time, Microsoft released the first version of its Internet Explorer browser. It went on to tie IE and Windows functions so tightly that many people simply used the browser over the once-dominant Netscape Navigator. The US Justice Department and several states ultimately sued Microsoft, accusing it of using its monopoly control over Windows to shut out competitors in other markets. The company fought the charges for years before settling in 2002.

The June 1998 release of Windows 98 was more low-key than the Windows 95 launch, though Microsoft denied it had anything to do with the antitrust case.

Windows 98 had the distinction of being the last with roots to the original operating system, MS-DOS. Each operating system is made up of millions of lines of instructions, or code, written in sections by programmers. Each time there's an update, portions get dropped or rewritten, and new sections get added for new features. Eventually, there's nothing left from the original.

Microsoft came out with Windows Me a few years later, the last to use the code from Windows 95. Starting with Windows 2000, Microsoft worked off the code built for NT, the 1993 system built from scratch.

The biggest release since Windows 95 came in October 2001, when Microsoft launched Windows XP at a hotel in New York's Times Square. Windows XP had better internet tools, including built-in wireless networking support. It had improvements in media software for listening to and recording music, playing videos and editing and organizing digital photographs.

Microsoft's next major release didn't come until Vista in November 2006. Businesses got it first, followed by a broader launch to consumers in January 2007. Coming after years of virus attacks targeting Windows machines and spread over the Internet, the long-delayed Vista operating system offered stronger security and protection. It also had built-in parental-controls settings.

But many people found Vista slow and incompatible with existing programs and devices. Microsoft launched Windows 7 in October 2009 with fixes to many of Vista's flaws.

Windows 7 also disrupted users less often by displaying fewer pop-up boxes, notifications and warnings - allowing those that do appear to stand out. Instead, many of those messages get stashed in a single place for people to address when it's convenient.

In a sign of what's to come, Windows 7 was able to sense when someone is using more than one finger on a touchpad or touch screen, so people can spread their fingers to zoom into a picture, for instance, just as they can on the iPhone.

Apple released its first iPhone in 2007 and the iPad in 2010. Devices running Google's Android system for mobile devices also caught on. As a result, sales of Windows computers slowed down. Consumers were delaying upgrades and spending their money on new smartphones and tablet computers instead.

Windows 8 and its sibling, Windows RT, represent Microsoft's attempt to address that. It's designed to make desktop and laptop computers work more like tablets.

Windows 8 ditches the familiar start menu on the lower left corner and forces people to swipe the edges of the screen to access various settings. It sports a new screen filled with a colorful array of tiles, each leading to a different application, task or collection of files. Windows 8 is designed especially for touch screens, though it will work with the mouse and keyboard shortcuts, too.

Microsoft and PC makers alike had been looking to Windows 8 to resurrect sales. Microsoft's recent launch event was of the caliber given for Windows 95 and XP.

But with Apple releasing two new iPads, Amazon.com shipping full-sized Kindle Fire tablets and Barnes & Noble refreshing its Nook tablet line next month, Microsoft and its allies will face competition that is far more intense than in the heyday of Windows 95 and XP.


About Me

Hi, I am Muthukumar, a software engineer.