Tuesday, November 30, 2010

Is Reverse Combustion the Key Alternative Energy Source?


 
  • A new technology from researchers at Princeton is turning carbon dioxide into a viable fuel source.
    These days, everyone from governments to major companies is worried about carbon footprints and is turning to alternative energy sources right and left to help reduce the amount of carbon dioxide in the environment.
    What if it were possible, though, to take the carbon dioxide that's already in the environment and turn it into a viable fuel source? Startup company Liquid Light, along with many other research projects, is making headway into creating just such an energy source.
    Princeton graduate student Emily Barton, following up on work done in the 1990s by Lin Chao, discovered that an electrochemical cell built around the same semiconductor used in photovoltaic solar cells can transform carbon dioxide into fuel with the help of sunlight.
    The idea behind this principle isn't a new one—at least not for plants, anyway. Photosynthesis has taken care of it on a daily basis for billions of years. The natural product is typically sugars, but research aimed at producing liquid fuel instead could quickly slow the buildup of greenhouse gases, changing the alternative energy picture forever.
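    As a rough illustration only—the specific product chemistry isn't detailed here—one representative half-reaction for electrochemically reducing carbon dioxide to a liquid fuel such as methanol is:

    \[ \mathrm{CO_2 + 6\,H^+ + 6\,e^- \;\longrightarrow\; CH_3OH + H_2O} \]

    with the protons and electrons supplied, in such a scheme, by the sunlight-driven semiconductor electrode.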

Monday, November 29, 2010

Motivational Monday

HAPPINESS AT WORK 




Happiness.
Zappos believes in delivering it.  Coke wants you to open it.
Countless books try to help you achieve it.
Research demonstrates that you are more productive with it.
And yet the truth is that so few people feel it at work.
Why?
Is it because leaders don’t create “happy” workplaces?
Is it because work is inherently miserable?
Or perhaps it’s because of our attitudes about work?
There’s really no definitive answer here.
A negative boss, bad working conditions or a toxic culture can certainly make people unhappy.
I’ve also seen how leaders can create happier and more productive employees by creating the right culture and work environment.
However, I believe the biggest determinant of our happiness at work is... us.
Our happiness has less to do with forces outside of us and more to do with what’s inside of us.
Happiness is an inside job.
Our happiness comes not from the work we do but from how we feel about the work we do.
I’ve met bus drivers, janitors and fast-food employees who are more passionate about their jobs and happier than some professional athletes making millions of dollars.
The way we think about work, feel about work and approach our work influences our happiness at work.
We can be happier by focusing on what we GET TO do instead of what we HAVE TO do. We can realize that the ability to work is a gift, not an obligation.
We can enjoy our jobs more by creating a new measuring stick. Instead of comparing ourselves to others we can measure ourselves against our own growth and potential. Each day we can come to work with the mindset that today we will be better than we were yesterday and tomorrow we will be better than we are today.
We can also enhance our happiness by tuning out negativity. Gandhi said, “I will not let anyone walk through my mind with their dirty feet,” and neither should we. Instead of listening to the negative voices let us focus on our positive choices. We can’t drive someone else’s bus. We can’t control someone else’s attitude but we can control our mindset. Our job is to drive our bus and make it great. If we focus on the positive and tune out the negative our happiness will soar.
Finally, we can energize our jobs by working for a bigger purpose. The research shows we are most energized when we are using our strengths and talents for a bigger purpose beyond ourselves. Every job will get old and mundane (if we let it). But purpose keeps it fresh. Purpose fuels us. When we work for a bigger purpose we find an endless supply of happiness at work. (This is actually what my next book is about. It’s titled THE SEED and it will be out in May 2011).
Happiness at work.
It’s often elusive but very attainable. Best of all, we decide how happy we want to be. So whether we are delivering happiness, opening happiness, sharing happiness, or creating it, remember that happiness is an inside job that you can bring to work today!
How do you enhance your happiness at work? Share your thoughts and strategies on our blog or Facebook page.
-Jon

Computer Algorithm Matches Kidney Donors with Those in Need

  • A newly developed computer algorithm is building a network of people in need of kidneys and potential donors. The technology could increase the number of kidney transplants, thereby saving thousands of lives per year.
    By the end of 2009, there were over 86,000 people on the waitlist to receive a kidney donation. According to the Scientific Registry of Transplant Recipients, that waitlist has been increasing steadily each year, while the number of donors has remained relatively constant. Now, a computer algorithm developed by researchers at Carnegie Mellon University is matching transplant recipients with potential living donors. The technology, being used to create a national network, could save lives and lower the risks of organ rejection.
    Kidneys for transplants can be either from living or deceased donors. Most people on the waitlist do not have compatible family members and are waiting for organs from deceased donors. But organs from living donors have higher success rates, since they are often closer matches to recipients' blood and tissue type, and the organs do not need to be transported over long distances.
    Now, the Organ Procurement and Transplantation Network (OPTN), operated by the United Network for Organ Sharing (UNOS), is using a computer algorithm to match kidney recipients with potential donors. Using the technology, the OPTN has started a national pilot program to increase the number of kidney paired-donation (KPD) transplants.

    Although the waitlist for kidneys continues to grow each year, the number of transplant surgeries has remained relatively constant (source: OPTN/SRTR).
    Matching possible donors with recipients is a huge computational task. In 2006, the first algorithm that could successfully perform the matching at nationwide scale—that is, with up to 10,000 pairs—was created by Tuomas Sandholm and Avrim Blum, both professors of Computer Science at Carnegie Mellon, and then-graduate student David J. Abraham. The algorithm has since been improved by Sandholm, with help from Ph.D. students Pranjal Awasthi, Erik Zawadzki and John Dickerson.
    The main technological problem was computer memory, which could not handle the huge demand of the matching process. To circumvent this difficulty, the researchers' algorithm never records the entire process within the computer's memory. Instead, it only records those parts of the process that turn out to be relevant.

    In a trial run, the process included only 43 kidney transplant candidates and 45 potential living donors. Researchers expect, however, that this national KPD network could eventually be expanded to include up to 20,000 individuals.
    In many situations, a potential kidney donor—usually a family member—is not medically compatible with the intended recipient. Blood and tissue samples must match; otherwise an organ has a high risk of rejection, which could lead to infection and even death.
    With the new technology, the computer creates new potential pairs based on medical information. A two-way exchange, for example, might match the donor from one pair with the recipient from the second, with the second donor donating to the first recipient. The algorithm is even capable of determining three-way exchanges.
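    To make the cycle-finding idea concrete, here is a minimal Python sketch that builds a toy compatibility graph and enumerates 2-way and 3-way exchange cycles by brute force. The pair names, the simplified blood-type rule and the exhaustive search are illustrative assumptions only; the production algorithm described above handles thousands of pairs with integer programming and the selective bookkeeping noted earlier, not enumeration.

from itertools import permutations

# Toy data: pairs["X"] = (donor blood type, recipient blood type).
pairs = {
    "P1": ("A", "B"),
    "P2": ("B", "A"),
    "P3": ("O", "AB"),
    "P4": ("AB", "O"),
}

def donor_compatible(donor_bt, recipient_bt):
    # Simplified ABO rule: type O donates to anyone, type AB receives from anyone.
    return donor_bt == "O" or recipient_bt == "AB" or donor_bt == recipient_bt

# Directed edge p -> q: the donor in pair p can give to the recipient in pair q.
edges = {(p, q) for p in pairs for q in pairs
         if p != q and donor_compatible(pairs[p][0], pairs[q][1])}

def exchange_cycles():
    # Return every 2-way and 3-way exchange cycle, each pair used once per cycle.
    found = set()
    for length in (2, 3):
        for combo in permutations(pairs, length):
            if all((combo[i], combo[(i + 1) % length]) in edges for i in range(length)):
                start = combo.index(min(combo))           # rotate to a canonical form
                found.add(combo[start:] + combo[:start])  # so each cycle appears once
    return found

print(exchange_cycles())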
    As the national network grows, the chances of finding a potential match will increase.
    "A unified nationwide exchange can yield significantly better solutions than multiple separate exchanges, and it is extremely rewarding that after we have worked on this for six years, the nationwide program is now live," said Sandholm in a statement. Sandholm led the team of researchers in the development of the computer algorithms that finds organ matches.
    On Oct. 27, a trial run of the program successfully matched seven donor-recipient pairs. The technology even found matches for two kidney recipients with particularly rare tissue types.
    "We are grateful both for Dr. Sandholm's expert consulting in developing our national pilot program and for the use of Carnegie Mellon's algorithm," said Charles Alexander, pesident of OPTN/UNOS. "These contributions have helped us develop the program more quickly and at significantly lower cost than we could have achieved otherwise, so we can focus on saving and enhancing lives through kidney paired-donation."
    The computerized matching process will be performed every four to five weeks, as participating transplant programs supply new patient and potential donor information.
    "In the future, kidney exchanges could be made even better by using our newest generation of algorithms that consider not only the current problem but also anticipate donors and patients who might later join the system," Sandholm said. "It can sometimes be best to wait on some of the transplants so that more or better transplants can be found as new pairs enter the system. Our new algorithms figure that out automatically using statistical properties of the blood- and tissue-type distribution of the population to generate possible sequences of additional pairs joining."
     

Kyoto Prize Winner Revolutionizing IT Network Design

  • Tools for accurately modeling very large networks—from server farms to wireless sensor nets—are being enabled by the pioneering mathematics of this year's Kyoto Prize winner, Laszlo Lovasz. Half-million-dollar awards were also made to stem-cell innovator Shinya Yamanaka and artistic groundbreaker William Kentridge.
    The Kyoto Prize—which has for 25 years aimed to rival the Nobel Prize—this year bestows three $550,000 awards: to Laszlo Lovasz for his contributions to information technology (IT); Shinya Yamanaka for discovering that skin, instead of embryos, can be regressed into stem cells; and William Kentridge for his artistic invention called "drawings in motion."

    Three Kyoto Prizes were awarded this year to Shinya Yamanaka, Laszlo Lovasz and William Kentridge (left to right). 
    The key to accuracy in IT simulations is to define their boundaries—such as the minimum number of servers required for a given latency or the maximum capacity of a wireless network. The mathematical principles to enable such accurate IT simulations are being delineated by Kyoto Prize winner and algorithm pioneer Lovasz. The Hungarian-born naturalized U.S. citizen has applied geometric "graph theory" to many long-standing mathematical problems, and as a result has enabled a new generation of simulators for very large-scale networks.

    Ceremonial globes were presented by Japanese children to Kyoto Prize winners, Kentridge, Lovasz and Yamanaka during the ceremony.
    "I am especially pleased to congratulate Dr. Laszlo Lovasz on receiving the Kyoto Prize this year," said President Barak Obama in a statement read at the Kyoto Prize Ceremony (Obama could not attend the ceremony, since the recent G20 summit was being held simultaneously). "Americans like him have contributed to myriad advancements in mathematical sciences and other fields of study. These efforts help advance all humankind and create a brighter future for all nations."
    Lovasz's mathematical theorems have commentators likening him to a modern-day Claude Shannon—the father of information theory and recipient of the first Kyoto Prize in 1985. In particular, Lovasz used geometry to extend results from the point-to-point radio links of Shannon's day to the tower-hopping era of modern cellular radio communications. Using graph theory, he placed an upper bound on an information channel's "Shannon capacity," a bound that has come to be called the "Lovasz number."
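    For readers curious what the "Lovasz number" looks like in practice, here is a minimal sketch—in Python, using the cvxpy optimization library—of one standard semidefinite-programming formulation, applied to the 5-cycle graph whose Shannon capacity Lovasz famously settled. The graph and the library choice are illustrative assumptions, not details from the prize citation.

import cvxpy as cp

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # the 5-cycle C5

# One standard formulation: maximize the total sum of entries of a positive
# semidefinite matrix with unit trace whose entries vanish on the graph's edges.
B = cp.Variable((n, n), symmetric=True)
constraints = [B >> 0, cp.trace(B) == 1]
constraints += [B[i, j] == 0 for i, j in edges]

problem = cp.Problem(cp.Maximize(cp.sum(B)), constraints)
problem.solve()

print(round(problem.value, 3))   # about 2.236, i.e. sqrt(5): the Lovasz number of C5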
    Other principles useful in IT include the "Lovasz local lemma" and the Lenstra-Lenstra-Lovasz "LLL algorithm," a lattice-reduction method that figures both in the analysis of standard encryption algorithms such as RSA and in the multiple-input, multiple-output (MIMO) wireless communication techniques used by WiFi, 4G, WiMax and LTE.
    Lovasz served as a senior scientist at Microsoft Research from 1999 to 2006, and is currently a professor at Eotvos Lorand University in Budapest. Next year, he will be returning to the United States, where he will spend a year at Princeton University pioneering new ways of using graph theory to manage very large-scale networks.
    This year, Kyoto Prizes were also bestowed on Japanese medical researcher Shinya Yamanaka, who defused the moral issues surrounding human embryos by showing that skin could be regressed into stem cells too. South African artist William Kentridge also received an award for the invention of his self-described "stone-age technology" that he calls "drawings in motion."
    EDITOR'S NOTE: The author of this story, R. Colin Johnson, also is a winner of the 2010 Kyoto Prize Journalism Fellowship.

Wednesday, November 24, 2010

IT user supergroup forces vendors to take notice


Open Data Center Alliance focuses on interoperability

By Jon Brodkin


Tech vendors, be warned: The next generation user group has arrived on the scene, and this time its members will force the world's biggest IT companies to listen closely. 

The recently formed Open Data Center Alliance, representing more than $50 billion in IT spending, has loaded its steering committee up with big names like Lockheed Martin, BMW, China Life, Deutsche Bank, JPMorgan Chase, Marriott, the National Australia Bank, Shell and UBS. Terremark, although a vendor, is chairing the steering committee. But as a builder of cloud and hosting services, Terremark is keenly interested in making sure the likes of IBM, HP, Cisco and Microsoft take notice. Chipmaker Intel is also involved with the alliance as a technical advisor. 


As the name suggests, the Open Data Center Alliance wants to ensure interoperability across core networking and cloud technologies, especially those provided by vendors that compete against each other. Technologies and concepts on the agenda include interoperable storage protocols, unified networking, policy-based power management, trusted computing pools and compliance with security requirements, dynamic workload placement, cloud "on-boarding" and provisioning, and flexible licensing models.
The alliance plans to deliver a more detailed roadmap and define an ideal, vendor-agnostic usage model in the first quarter of 2011. Terremark's Marvin Wheeler points to the cell phone industry as a potential model of interoperability. Although smartphones themselves are often restricted to just one of the four major carriers, Wheeler points out: "If you look at the cell phone industry, there are certain standards of interoperability that all the manufacturers adhere to. I could be in Miami, Fla., using Verizon as my cell carrier with a Motorola phone and send you a video clip, and you're in New York City using AT&T and a Samsung phone, and the video clip will work just fine on your phone."

This type of interoperability hasn't prevented the cell phone industry from being highly innovative, as devices like iPhones and Androids prove, Wheeler notes.
But in the data center industry and emerging cloud-hosting services, ensuring that technologies from different vendors work together and allow portability of workloads and data across different platforms is easier said than done.
Vendors are always willing to use the latest buzzwords, even if their actions don't justify the advertising. Vendors are slapping the words "green" and "cloud" on just about any product and saying "we're cloud-friendly and we're green-friendly," says Andrew Feig, executive director of the UBS technology advisory group and a member of the Open Data Center Alliance steering committee.
But tying the whole technology stack together without forcing customers to buy from just one vendor is difficult at best today, he says.
One example is power management. Vendors have introduced different implementations for controlling and monitoring the power usage of data center equipment, when what customers really need are standards and common methods of managing energy use throughout the data center and entire building infrastructure, Feig says.
Management of servers is another area that lacks integration among vendors, Feig says. When a customer buys a new server "it's never plug-and-play," he says. "There is a fair amount of work. It's usually us changing the way we do things, and not the vendors."

UBS has a private cloud to deliver more flexible access to compute capacity internally, but doesn't limit itself to one or just a few vendors. "We have stuff from everybody," Feig says.
Integrated appliances that bundle servers, storage, networking and virtualization aren't necessarily a bad thing, even though they represent a single-vendor approach, but they cost more up front than building a technology stack piece by piece, Feig says.
Another issue to be tackled by the Open Data Center Alliance is so-called "on-boarding" of workloads and customers onto private and public cloud services while preventing lock-in to a specific service.
Feig expressed confidence that the problem of moving virtual machines from one hypervisor to another will get solved. But, he adds, virtualization is relatively mature compared to the other technologies and processes used by cloud services. For customers, it could end up costing a "fortune" to integrate with cloud providers like Amazon, Google and Microsoft and ensure that workloads can move from one cloud to another, he said.


Common types of security controls and billing infrastructures will also be needed to ensure cloud interoperability, says Curt Aubley of Lockheed Martin, which is the largest IT provider to the federal government, and therefore both a consumer and provider of technology. Aubley is president of the Open Data Center Alliance and a CTO at Lockheed.
Some of Lockheed's government customers have begun using public cloud services for application development and delivery, although Lockheed itself has built an internal cloud for its own users. To ensure interoperability, the vendors will have to agree on considerations that are both technical- and business-oriented, Aubley says.
In today's cloud market, Aubley says, "It's almost like the old days where if you wrote your application on a mainframe it didn't work on a Unix computer very well … and if you wrote an app on a Unix computer it wouldn't work on a Windows box."
Although technology user groups tend to be small and lack widespread visibility, Feig is confident the Open Data Center Alliance will defy that trend by bringing together giant users and focusing on broad industry trends rather than individual products.
"The big difference is this one is user driven," he says. "It's very focused on problems and the solutions to the problems, rather than products."

Tuesday, November 23, 2010

3 Reasons Tech Recovery Is Stalling

  • Global semiconductor revenue in 2011 will likely grow only about 5 percent, compared with 30 percent in 2010, muting expectations for the rest of the IT sector. Analysts' reasons for this trend are threefold: stubborn unemployment, tight credit availability and the lack of recovery in the housing market.
    Market analysts at iSuppli Corp. (El Segundo, Calif.) recently predicted that the spectacular semiconductor market recovery in 2010 was cooling, on track for a modest 5.1 percent gain in 2011, compared with the meteoric 32 percent increase predicted for 2010. The reasons, however, are not directly related to semiconductors, fueling predictions that IT sector growth will be stunted next year too.
    Depleted inventories from recession-induced belt-tightening, combined with stronger-than-expected consumer demand, prompted stellar 32 percent semiconductor market growth in 2010—up to $302 billion from roughly $229 billion in 2009, according to iSuppli. However, now that inventories have been replenished, three more general economic problem areas will mute growth next year, slowing semiconductor buying to just 5.1 percent growth in 2011—up just $15.4 billion, to $317.4 billion. Slow, steady growth will continue after that, according to iSuppli, reaching $357.4 billion by 2014.
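    As a quick worked check of those figures:

    \[ \$302\,\text{billion} \times 1.051 \approx \$317.4\,\text{billion}, \qquad \$317.4\,\text{billion} - \$302\,\text{billion} = \$15.4\,\text{billion} \]

    while 32 percent growth to $302 billion implies a 2009 base of roughly $229 billion ($302 billion / 1.32).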
    IT sector growth may cool too, due to the same three bogeymen: stubborn unemployment, tight credit and continued softness in the housing market. Those factors are inhibiting consumer spending, resulting in a softening of demand and muted growth across the board—factors that will likely be mimicked by the IT sector at least through the first quarter of 2011.
    The good news is that technology markets have already regained the ground lost during the recession, and steady growth will likely continue for the foreseeable future (see figure below). IT markets, for instance, have rebounded from their dismal performance in 2009 to achieve levels that have already exceeded their previous peaks in 2007 of about $272 billion. Computer systems and peripherals experienced the biggest boost, accounting for 40 percent of the regained ground, according to iSuppli. PCs also have rebounded, gaining 22.8 percent over the same period in 2009. Wireless has rebounded as well, accounting for 20 percent of the total growth—a trend that is likely to continue as far as 2014. The remaining rebound sector was consumer electronics, holding 19 percent share of the newfound total market.

    The tech sector's rebound in 2010 has already regained the ground lost in the 2008 recession, but growth rates will cool to a sustainable 5 percent through 2014.
    The bad news is that all these markets will be affected by replenished inventories and low consumer confidence levels, both prompted by the big three: jobless recovery, tight credit and poor housing market prospects. Even the bellwether flat-panel television segment is weakening in the fourth quarter of 2010, resulting in inventory build-ups in both the United States and Asia.
    Likewise, data center equipment—which consumes its fair share of semiconductor microchips—will dip to single-digit growth, along with communication equipment and even automotive electronics, which account for 9 percent and 6 percent of the total market, respectively, so says iSuppli in its latest report titled "Semiconductor Revenue Growth Targets Soft Landing Following 2010 Boom."
     

Monday, November 22, 2010

Speed up Linux: No kernel patch required



Want to speed up your Linux desktop without compiling a new kernel? You don't need a 200-plus line patch for the Linux kernel when a couple of lines of Bash will do the trick [1].
A few days ago a kernel developer posted a patch to the Linux kernel [2] that changes the way the Linux "scheduler" works. For non-geeks, this is the way that the kernel hands off tasks to the CPU. This has been a topic of a lot of debate over the years, with kernel developers proposing dueling schedulers and sometimes storming off [3] when their proposal was rejected.
So there was a lot of buzz and excitement when the patch from Mike Galbraith, clocking in around 225 lines of code, showed a dramatic improvement in desktop latency. It's all well and good that the patch works, but it will be a long time before most Linux users see it: a few weeks at least before it makes it into the mainline kernel, and six or seven months before it trickles down to users. Some users are willing and able to recompile their kernel, or willing to install patched kernels from third-party sources, but most users don't fall into those categories.
Turns out, users don't have to wait if they're willing to make a few small modifications to their systems involving a few lines of Bash code [1] added to a system configuration file (/etc/rc.local) and a user's login file (.bashrc). That comes from Red Hat's Lennart Poettering.
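For the curious, here is a rough Python sketch of the mechanism behind those one-liners: each login session gets its own CPU control group, so the kernel's scheduler shares CPU time fairly between sessions rather than between individual processes. The cgroup mount point and layout below are assumptions for illustration only; the actual Bash lines to add to /etc/rc.local and ~/.bashrc are in the Web Upd8 post [1].

import os

CGROUP_CPU = "/dev/cgroup/cpu"   # assumed mount point, set up once at boot (e.g., from rc.local)

def put_session_in_own_cgroup(pid=None):
    # Create a per-session group and move this process into it. Children spawned
    # by the shell inherit the group, so a compile job in one terminal no longer
    # starves an interactive session in another.
    pid = pid or os.getpid()
    group = os.path.join(CGROUP_CPU, "user", str(pid))
    os.makedirs(group, exist_ok=True)
    with open(os.path.join(group, "tasks"), "w") as tasks:
        tasks.write(str(pid))    # writing a PID to 'tasks' moves that process into the cgroup

if __name__ == "__main__":
    put_session_in_own_cgroup()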
See the post on Web Upd8 [1] for instructions on Fedora and Ubuntu machines. I've tried the second method on a machine running Linux Mint 10 (which is Ubuntu 10.10 based). In decidedly unscientific testing, it seems to produce an improvement in several areas — particularly when using Firefox or Chrome. I haven't tried the kernel patch yet, but according to Markus Trippelsdorf [4], the user-space changes reduce latency more.
The immediate effect may be a speedup on the desktop for Linux users based on Poettering or Galbraith's approach. But even better, maybe this will kick off a new round of competing ideas on speeding up the Linux desktop.

Motivational Monday

THE POWER OF THANK YOU


In the spirit of Thanksgiving I'd love to share with you the benefits and power of two simple words. THANK YOU.
They are two words that have the power to transform our health, happiness, athletic performance and success. Research shows that grateful people are happier and more likely to maintain good friendships. A state of gratitude, according to research by the Institute of HeartMath, also improves the heart's rhythmic functioning, which helps us to reduce stress, think more clearly under pressure and heal physically. It's actually physiologically impossible to be stressed and thankful at the same time. When you are grateful you flood your body and brain with emotions and endorphins that uplift and energize you rather than the stress hormones that drain you.
Gratitude and appreciation are also essential for a healthy work environment. In fact, the number one reason people leave their jobs is that they don't feel appreciated. A simple thank you and a show of appreciation can make all the difference.
Gratitude is like a muscle: the more we use it, the stronger it gets. In this spirit, here are 4 ways to practice Thanksgiving every day of the year.
1. Take a Daily Thank You Walk - I wrote about this in The Energy Bus. Take a simple 10-minute walk each day and say out loud what you are thankful for. This will set you up for a positive day.
2. Meal Time Thank You's - On Thanksgiving, or just at dinner with your friends and family, go around the table and have each person, including the kids at the little table, say what they are thankful for.
3. Gratitude Visit - Martin Seligman, Ph.D., the father of positive psychology, suggests that we write a letter expressing our gratitude to someone. Then we visit this person and read them the letter. His research shows that people who do this are measurably happier and less depressed a month later.
4. Say Thank You at Work - Doug Conant, the CEO of Campbell Soup, has written over 16,000 thank you notes to his employees and energized the company in the process. Energize and engage your co-workers and team by letting them know you are grateful for them and their work. And don’t forget to say thank you to your clients and customers too.
I hope you have a wonderful Thanksgiving. I’m thankful for YOU.
- Jon
Blog Question: What are you thankful for? Share one or two things that you are thankful for in your life... Post a note on our blog or Facebook page.

3D Printers Could Print Space Station Parts in Orbit



  • Space stations and satellite components printed from in-orbit 3D printers might be the future of space exploration. At least, that’s the hope of one new tech company.

       
    Lately at Smarter Technology, we’ve been blogging about the 3D printers that seem to be the hottest tech trend here on Earth. Now, a new company hopes to bring this technology to space, with orbiting 3D printers that churn out inexpensive parts for space stations, satellites and more.
    The company, called Made in Space, hopes to launch 3D printers into space, where they could save time and increase the efficiency of aeronautical research. During a conference entitled “Space Manufacturing 14: Critical Technologies for Space Settlement,” held at NASA’s Ames Research Center in California, the company discussed its ideas and plans.
    "It makes perfect sense that we should build everything for space, in space," said Jason Dunn, one of the founders of Made in Space.
    Dunn explained that products made in space would not need to withstand the g-forces and vibrations produced during launches from Earth. The mass of individual components could thus be reduced by nearly 30 percent—translating to lower costs and less use of valuable resources. A reduced mass also means less fuel would be needed.
    All that engineers on Earth would need to launch is the feedstock for the printers—material Dunn describes as “gray goo,” which can be supplied from metal, plastic and other materials.
    Orbiting printers could also allow for the recycling of broken parts. If a component were to break down, it could be melted back into feedstock and reprinted in space. This process would save money and time by reducing trips to and from Earth.

    Thinking beyond Earth orbit, the company envisions using 3D printers to help establish colonies on the moon, Mars and other planets. Printers could help build greenhouse structures, buildings and other necessary extraterrestrial infrastructure.
    Because some 3D printers are already able to use concrete, Adam Ellsworth, a scientific adviser with Made in Space, thinks that lunar regolith—the soil found on the moon—could work as feedstock for the printers. Metallic sources can also be found on the moon and other planets.
    "You can just bring the files of the tools, and the files of the parts," Dunn said.
    According to Ellsworth, the company has already succeeded in printing space-grade plastic components. The next step will be to test how 3D printers perform in zero gravity, which Made in Space hopes to undertake in the next six months. To create a weightless environment, the company may use suborbital crafts currently under development.
    If the printers are successful in zero gravity, the company will begin in-orbit testing, possibly onboard the International Space Station.
    3D printers have long been known for their ability to print small parts. The company is now looking to prove they can also print larger components, such as long beams.
    "There's definitely a lot of interest in what we're trying to do," Dunn said. Made in Space, already attracting interest from both the public and private sectors, is currently seeking investors.
     

Monday, November 15, 2010

New Molecule Works as Heat Battery

  • A new compound recently discovered by researchers at MIT can repeatedly store and release thermal energy without any degradation. The discovery could lead to a wide range of energy storage and retrieval solutions.
    A paper (abstracted here) in the latest edition of Angewandte Chemie, submitted by an MIT team under the direction of Professor Jeffrey Grossman and funded by the NSF and MIT's Energy Initiative, describes the discovery of a new compound whose molecules can repeatedly store and release thermal energy without degradation.
    When heated, the substance—fulvalene diruthenium—assumes a new, highly stable shape that persists upon cooling, and—when the molecule is reheated or exposed to a catalyst—reverts to release stored energy.
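    In schematic form, following the description above (notation only, not a balanced chemical equation):

    \[ \text{parent molecule} + \text{input heat} \longrightarrow \text{metastable isomer}, \qquad \text{metastable isomer} \xrightarrow{\ \text{reheat or catalyst}\ } \text{parent molecule} + \text{stored heat} \]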
    Fulvalene diruthenium appears more energetically efficient than current methods used to store solar heat, such as massive pools of molten salt. Meanwhile, the compound's ability to store heat long-term and then release enough of it (at up to 200 degrees C) both to continually trigger its own exothermic behavior (in effect, working as a fuel) and to perform serious work (e.g., superheating water to produce steam) offers big benefits that engineers can exploit in creating a wide range of energy storage and retrieval solutions.
    The current hitch is price: The relatively simple molecule depends for its behavior on properties of the rare element ruthenium. The MIT team, however, is now undertaking a second phase of research to determine if other, lower-cost molecules exhibit similar behavior, and whether these can be used to synthesize compounds with similar properties.

Monday, November 01, 2010

Are your skills ready for the opportunities in networking?


IT Best Practices Alert By Linda Musthaler

"The future's so bright I gotta wear shades?"  Well, pull out your Oakleys, my friend, because career prospects are looking up for network professionals, and hiring managers are looking for skilled people.
According to the fourth quarter 2010 Robert Half Technology IT Hiring Index and Skills Report, the technical skill set most in demand by a majority of CIOs is networking. What's more, network managers will see average starting salaries rise 4.3 percent, to the range of $79,250 to $109,500 per year, according to the Robert Half Technology Salary Guide 2011.
A few months ago, Denise Dubie identified "network engineer" as one of the 10 best IT jobs right now. Dubie's article cites research from Gartner that indicates skills in networking, voice and data communications technologies will continue to be in demand. "The future of IT and enhanced competitive advantage requires social interactions and greater collaboration and that is why the importance of the network continues to grow," says Mark McDonald, group vice president and head of research, Gartner Executive Programs. "Even though revenue was down in 2009, CIOs reported that transaction volumes and communications requirements continued to grow, making it imperative to focus on network technologies."
So there you have it. The industry experts agree that the career opportunities are there if you have the right networking skills, and now you have more choices than ever on how to "skill up."
Last week at Interop New York, the HP Networking group announced an entirely new portfolio of networking certifications available through the HP ExpertONE program. The new certifications span the entire data center, from the edge to the core, and include specialization areas such as video, security and wireless.
All of HP's networking products and services were completely overhauled following the company's acquisition of 3Com, which also included H3C and TippingPoint networking products. Once combined with HP's own ProCurve brand, this broad range of new networking products necessitated the development of all new certifications. The majority of the new certifications launch on Nov. 1.
The new HP ExpertONE program has an emphasis on skilling people to work in data centers built on the Converged Infrastructure architecture, and the new networking certifications are no exception. Candidates for HP certification can expect to receive an education about what the underlying technologies deliver; how the products work together; how the products form architectures that solve relevant contemporary challenges; and how it all links holistically with the other pillars of a converged data center, namely compute, storage and management. HP also places emphasis on how to build an infrastructure based on open standards and in a multivendor environment.
HP offers training and certification at various levels. Those levels and the corresponding certifications include:
Advanced level
HP Master Accredited Systems Engineer (MASE) -- Network Infrastructure
HP Master Accredited Systems Engineer (MASE) -- Wireless Networks
Intermediate level
HP Accredited Systems Engineer (ASE) -- Network Infrastructure
HP Accredited Systems Engineer (ASE) -- Wireless Networks
HP Accredited Systems Engineer (ASE) -- IP Telephony
HP Accredited Sales Consultant (ASC) -- Networking
HP Accredited Sales Consultant (ASC) -- Enterprise Networking
Foundational level
HP Accredited Integration Specialist (AIS) -- Network Infrastructure
HP Accredited Integration Specialist (AIS) -- Network Security
Entry level
HP Accredited Sales Professional (ASP) -- IP Telephony
HP Accredited Sales Professional (ASP) -- Networking
HP Accredited Presales Professional (APP) -- Network Security
HP Accredited Presales Professional (APP) -- S-Series Networking Products
Networking professionals who already have certifications from Cisco and other vendors can become HP certified via a fast track. "We know that people who have other vendors' certifications already know networking protocols," says Mike Banic, vice president of marketing for HP Networking. "What they need to learn when they come to the HP ExpertONE program is HP's approach to networking, and how to build a network based on open standards. A fast track path lets them skip the basics and move right into what they need to know."
Get more information about HP networking certifications here.
Also last week, Cisco announced new or refined certifications that more closely align with employers' expected needs over the next three to five years. These certifications focus on what a person would do on a day-to-day basis in the areas of security and voice.
The Cisco Certified Network Professional (CCNP) Security program aligns to the specific job role and responsibilities of the network security engineer who is tasked with testing, deploying, configuring and troubleshooting the core technologies and devices that make up network security. Technologies covered include the Cisco IOS security features built into Cisco routers and switches, zone-based firewalls, high-availability virtual private networks, and intrusion detection and prevention systems.
The revamped CCNP Voice certification (formerly CCVP) validates the advanced knowledge and skills required to integrate voice and Cisco Unified Communications solutions into underlying network architectures. It also validates a robust set of skills for implementing, operating, configuring and troubleshooting a converged Internet Protocol network.
These new or updated networking certifications (and their associated training) from both HP and Cisco are designed to develop and hone the skills that are so in demand today and into the next few years. Just remember to take off your shades before you go to class.

Motivational Monday


Thought for the Day
 
November 1, 2010
WHEN YOU ASK ANOTHER PERSON TO DO SOMETHING, IT MAY HELP BOTH HIM AND YOU IF YOU TELL HIM WHAT TO DO, WHY HE SHOULD DO IT, WHEN HE SHOULD DO IT, WHERE HE SHOULD DO IT, AND HOW HE MAY BEST DO IT.
We are all influenced by our background and experience. We perceive instructions in the context of our education, experience, heritage, the culture of our organization, and a number of other variables. Good managers know this, and they make sure that their instructions are clear, concise, and well understood. They also know that they must walk a fine line between conveying adequate instructions and killing workers’ incentive by not allowing them sufficient latitude to do their jobs. You may strike the right balance between instruction and motivation by encouraging employees to participate in setting objectives for themselves and their team, helping them develop a plan for achieving their goals, and by making sure that each individual clearly understands the team’s mission and his or her role in achieving it. Suggest that team members check in occasionally to report their progress, then get out of their way and cheer them on to victory.
This positive message is brought to you by the Napoleon Hill Foundation. Visit us at http://www.naphill.org. We encourage you to forward this to friends and family. They can sign up for this free service at our web site.