Categories
technews

94% of the World’s Top 500 Supercomputers Run on Linux

Linux tends to dominate as the operating system of choice on the world’s fastest supercomputers.

A full 469, or 94 percent, of the top 500 supercomputers now run Linux, according to the Top500 report.

Meanwhile, only three of the world’s top supercomputers in this latest report — ranking at No. 132, 165 and 183, respectively — run Windows.

The Top500 list is compiled by Hans Meuer of the University of Mannheim, Germany; Erich Strohmaier and Horst Simon of Lawrence Berkeley National Laboratory; and Jack Dongarra of the University of Tennessee, Knoxville.

Speed and Efficiency

Taking the crown for the No. 1 spot in this latest list is Titan, a Cray XK7 system installed at the Oak Ridge National Laboratory by the U.S. Department of Energy. Titan achieved 17.59 petaflops on the Linpack benchmark using 261,632 of its Nvidia K20x accelerator cores.

Interestingly, Titan is also one of the most energy-efficient systems on the list, consuming a total of 8.21 megawatts and delivering 2,143 megaflops per watt.

Slipping to the No. 2 spot in this latest report, meanwhile, was Lawrence Livermore National Laboratory’s Sequoia, an IBM BlueGene/Q system that was top of the list in June. With 1,572,864 cores, Sequoia is the first system to top one million cores.

Rounding out the top five systems are Fujitsu’s K computer installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan (No. 3); a BlueGene/Q system named Mira at Argonne National Laboratory (No. 4); and a BlueGene/Q system named JUQUEEN at the Forschungszentrum Juelich in Germany (No. 5), which is now the most powerful system in Europe.

Intel in the Lead

Looking at the data in geographical terms, the United States is the leading consumer of high-performance computing (HPC) systems, with 250 of the 500 systems on the list. Asia comes next, with 124 systems, followed by Europe, with 105.

Regarding vendors, meanwhile, Intel continues to provide the processors for the largest share of the Top500, accounting for 75.8 percent of the list. A full 84 percent of the systems included use processors with six or more cores; 46 percent tap eight or more cores. A total of 62 systems on the list use accelerator/coprocessor technology.

Full details on the new report can be found on the Top500 site. To mark the 20th anniversary and the 40th edition of the list, a special poster display is being featured at the SC12 conference (Booth 1925) this week in Salt Lake City.

 

Categories
Redhat

Red Hat: a short history…

In 1993 Bob Young incorporated the ACC Corporation, a catalog business that sold Linux and Unix software accessories. In 1994 Marc Ewing created his own Linux distribution, which he named Red Hat Linux (Ewing had worn a red Cornell University lacrosse hat, given to him by his grandfather, while attending Carnegie Mellon University). Ewing released the software in October, and it became known as the Halloween release. Young bought Ewing’s business in 1995, and the two merged to become Red Hat Software, with Young serving as chief executive officer (CEO).

Red Hat went public on August 11, 1999, achieving the eighth-biggest first-day gain in the history of Wall Street. Matthew Szulik succeeded Bob Young as CEO in December of that year.

On November 15, 1999, Red Hat acquired Cygnus Solutions. Cygnus provided commercial support for free software and housed maintainers of GNU software products such as the GNU Debugger and GNU Binutils. One of the founders of Cygnus, Michael Tiemann, became the chief technical officer of Red Hat and by 2008 the vice president of open source affairs. Later Red Hat acquired WireSpeed, C2Net and Hell’s Kitchen Systems.

In February 2000, InfoWorld awarded Red Hat its fourth consecutive[13] “Operating System Product of the Year” award for Red Hat Linux 6.1. Red Hat acquired Planning Technologies, Inc in 2001 and in 2004 AOL’s iPlanet directory and certificate-server software.

[Photo: Red Hat headquarters in 2011]
Red Hat moved its headquarters from Durham, NC, to N.C. State University’s Centennial Campus in Raleigh, North Carolina in February 2002. In the following month Red Hat introduced Red Hat Linux Advanced Server,[14][15] later renamed Red Hat Enterprise Linux (RHEL). Dell, IBM, HP and Oracle Corporation announced their support of the platform.

In December 2005 CIO Insight magazine conducted its annual “Vendor Value Survey”, in which Red Hat ranked #1 in value for the second year in a row. Red Hat stock became part of the NASDAQ-100 on December 19, 2005.

Red Hat acquired open-source middleware provider JBoss on June 5, 2006, and JBoss became a division of Red Hat. On September 18, 2006, Red Hat released the Red Hat Application Stack, its first stack to integrate the JBoss technology, certified by other well-known software vendors. On December 12, 2006, Red Hat moved from NASDAQ (RHAT) to the New York Stock Exchange (RHT). In 2007 Red Hat acquired MetaMatrix and made an agreement with Exadel to distribute its software.

On March 15, 2007, Red Hat released Red Hat Enterprise Linux 5, and in June acquired Mobicents. On March 13, 2008, Red Hat acquired Amentra, a provider of systems integration services for service-oriented architecture, business process management, systems development and enterprise data services. Amentra operates as an independent Red Hat company.

On July 27, 2009, Red Hat replaced CIT Group in Standard and Poor’s 500 stock index, a diversified index of 500 leading companies of the U.S. economy. This has been reported as a major milestone for Linux.

On December 15, 2009, it was reported that Red Hat would pay $8.8 million to settle a class action lawsuit related to the restatement of financial results from July 2004. The suit had been pending in a U.S. District Court in North Carolina. Red Hat reached the proposed settlement agreement and recorded a one-time charge of $8.8 million for the quarter that ended Nov. 30. The agreement was pending court approval.

On January 10, 2011, Red Hat announced that it would expand its headquarters in two phases, adding 540 employees to the Raleigh operation. The company will invest over $109 million. The state of North Carolina is offering up to $15 million in incentives. The second phase involves “expansion into new technologies such as software virtualization and technology cloud offerings”.

On August 25, 2011, Red Hat announced it would move about 600 employees from the N.C. State Centennial Campus to Two Progress Plaza downtown. Progress Energy plans to vacate the building by 2012 if its merger with Duke Energy is completed. Red Hat also plans to rename the building.

Notably, Red Hat became the first one-billion dollar open source company in its fiscal year 2012, reaching $1.13 billion in annual revenue.[1]

Please visit Red Hat’s website: RedHat.com

Document courtesy: www.linux.org

 

Categories
technews

Google Unveils $199 Laptop

Google has launched the Acer C7 Chromebook for $199.

Specifications are:

  • 11.6’’ (1366×768) display
  • 1 inch thin – 3 lbs / 1.4 kg
  • 3.5 hours of battery
  • Dual-core Intel® Celeron® Processor
  • 100 GB Google Drive cloud storage with 320 GB hard disk drive
  • Dual band Wi-Fi 802.11 a/b/g/n and Ethernet
  • HD Camera
  • 3x USB 2.0
  • 1x HDMI Port, 1x VGA port

Click here to view more images of the Chromebook.

Categories
Linux

Amazon Glacier: Cloud Storage For Archives And Backups Launched

Amazon Web Services (AWS) has launched a new service called Amazon Glacier. You can use this service for archiving mission-critical data and backups in a reliable way, whether in enterprise IT or for personal use. The service costs as little as $0.01 (one US penny, one one-hundredth of a dollar) per gigabyte per month. Your data is stored across multiple geographically distinct facilities, with hardware and data integrity verified automatically, regardless of the length of your retention periods. The first thing that comes to mind is that Glacier would be a good place to back up family photos and videos from my local 12TB NAS.
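
As a rough back-of-the-envelope illustration at the quoted $0.01 per GB per month storage rate (retrieval and request fees are billed separately): archiving the full contents of a 12 TB NAS, roughly 12,000 GB, would come to about 12,000 × $0.01 ≈ $120 per month for storage, or on the order of $1,440 per year.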

More about Glacier service

  1. Secure – Amazon Glacier supports secure transfer of your data over Secure Sockets Layer (SSL) and automatically stores data encrypted at rest using Advanced Encryption Standard (AES) 256, a symmetric-key encryption standard with 256-bit keys. If you are especially cautious, you can also encrypt your files with GPG before uploading them (see the example after this list).
  2. Durable – Amazon Glacier is designed to provide average annual durability of 99.999999999% for an archive.
  3. Simple – Amazon Glacier allows you to offload the administrative burdens of operating and scaling archival storage to AWS, and makes retaining data for long periods, whether measured in years or decades, especially simple.
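
For instance, a minimal client-side encryption step before upload might look like the following (photos-2012.tar.gz is just a placeholder archive name, and AES256 is chosen to match Glacier’s at-rest cipher):

$ gpg --symmetric --cipher-algo AES256 photos-2012.tar.gz
(prompts for a passphrase and writes photos-2012.tar.gz.gpg, which is the file you upload)

$ gpg --output photos-2012.tar.gz --decrypt photos-2012.tar.gz.gpg
(run this after retrieval to restore the original archive)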

Glacier is not a backup product

Glacier is an archive product. For example, my personal home NAS server is good enough to recover from various accidents. But it will not help if the entire array or all of the drives die. In that situation, I can retrieve my photos, videos and other data from Glacier. I have tons of old invoices and other business data that I need to keep for 10 years due to legal obligations, and Glacier is the perfect product for that. From the Amazon blog:

a) If you are part of an enterprise IT department, you can store email, corporate file shares, legal records, and business documents. The kind of stuff that you need to keep around for years or decades with little or no reason to access it.

b) If you work in digital media, you can archive your books, movies, images, music, news footage, and so forth. These assets can easily grow to tens of Petabytes and are generally accessed very infrequently.

c) If you generate and collect scientific or research data, you can store it in Glacier just in case you need to get it back later.

In short, if your house or data center has burned down and you need all your data back, you need this kind of service.

But how durable is my data? Can I use the service for 20 years without losing a single file?

Amazon claims that Glacier will store your data with high durability: the service is designed to provide average annual durability of 99.999999999% per archive. Behind the scenes, Glacier performs systematic data integrity checks and heals itself as necessary with no intervention on your part. There is plenty of redundancy, and Glacier can sustain the concurrent loss of data in two facilities. From the S3 FAQ:

Amazon S3 is designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.

Cost

Glacier is available for use today in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), Asia Pacific (Tokyo) and EU-West (Ireland) Regions. For pricing details, see the Amazon Glacier website here.

Shared via NIXCRAFT

Categories
Linux

Making a highly secure Linux server

Data and application security is one of the major headaches faced by a system administrator. In this blog post I am trying to share a few points that make a server less vulnerable to attack.

  1. Remove all unwanted software: Remove or disable all unnecessary services and packages installed on the server. You can remove unnecessary packages using yum or rpm -e commands. To configure yum locally on your machine, read this link.
  2. Keep the kernel and software up to date: Make sure you are running the latest available kernel and that all the software packages you use are up to date. On Red Hat based systems, run yum update in the terminal; on Debian based systems, run apt-get update followed by apt-get upgrade.
  3. Encrypt communication in and out of the server: Encrypt the data sent to and from the server using passwords or user/machine certificates. It is recommended not to use plain FTP, telnet, rlogin, etc., because traffic from these protocols can easily be captured by a network sniffing tool. Suggested methods of communication are scp, sftp and ssh.
  4. SELinux: I strongly recommend using SELinux, which provides flexible Mandatory Access Control (MAC). Under standard Linux Discretionary Access Control (DAC), an application or process running as a user (UID or SUID) has the user’s permissions to objects such as files, sockets, and other processes. Running a MAC kernel protects the system from malicious or flawed applications that can damage or destroy the system. See the official Red Hat documentation, which explains SELinux configuration.
  5. User accounts and a strong password policy: Enforce strong password creation across the network, set password aging and force users to change their passwords at regular intervals, restrict reuse of previous passwords, and enable locking of user accounts after repeated login failures.
  6. Disable root login: Never log in as the root user. Use sudo to execute root-level commands as and when required. sudo greatly enhances the security of the system by avoiding sharing the root password with other users and admins, and it provides simple auditing and tracking features too.
  7. Find and block/close all unwanted listening ports: Use iptables to close open ports, or stop unwanted network services using the service and chkconfig commands (see the sketch after this list).
  8. Last but not least, physical server security: Protect access to the physical server and disable booting from USB and CD/DVD drives. Set BIOS and GRUB boot loader passwords. And, if possible, make sure all production servers are kept isolated and locked.
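
As a rough sketch of point 7 on a Red Hat style system (cups and port 23 are only examples here; adjust them to your environment):

# netstat -tulpn
(lists all listening TCP/UDP ports and the processes that own them)

# chkconfig cups off
# service cups stop
(prevents an unneeded service from starting at boot, and stops it right now)

# iptables -A INPUT -p tcp --dport 23 -j DROP
(drops inbound telnet traffic, for example)
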
Categories
Databases

Connecting to a remote Oracle Database.

Local database

Connecting to a local database is easy, just use:

$ sqlplus dbUser/dbPassword@dbSid
Here’s the syntax for connecting to a remote database using its SID:

$ sqlplus dbUser/dbPassword@'(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=remoteServer)(PORT=1521)))(CONNECT_DATA=(SID=dbSid)))'
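
If the remote database is registered under a service name rather than a SID, the EZConnect shorthand (available in Oracle 10g and later) is usually simpler; serviceName below is just a placeholder for your actual service name:

$ sqlplus dbUser/dbPassword@//remoteServer:1521/serviceName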

Categories
Linux

How do I add a new hard disk to a Linux operating system?

Once the new hard disk is connected, you can verify that it has been detected by issuing the command below.
# fdisk -l

See the screenshot below.

[Screenshot: fdisk -l output showing the newly attached disk]

As you can see above, the second hard disk is detected as “/dev/sdb”.

Steps for creating a new ext3 partition
1. # fdisk /dev/sdb
Type “m” to see all the available options.
Use “p” to print the current partition table.
Now use “n” to create a new partition.

You will be prompted to select either:

1. an extended partition (e), or
2. a primary partition (p)

Select the partition type based on your requirement and press Enter.

Then enter the partition number (the default is 1) and press “Enter”.
Enter the desired size for the partition; for example, to allocate roughly 5 GB you can enter +5000M (newer versions of fdisk also accept +5G).

To save the changes and exit the fdisk menu, enter “w”.
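
In short, a typical fdisk key sequence for a single primary partition of about 5 GB looks something like: n, then p, then 1, then Enter (accept the default starting cylinder), then +5000M, then w. The exact prompt wording varies between fdisk versions.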

2. Verify the created partition by issuing “fdisk -l” again.

Output below:

********************************************************************************
[root@redhatdemo ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 2610 20860402+ 8e Linux LVM

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 609 4891761 83 Linux

********************************************************************************

You can see the new partition “/dev/sdb1”.

3. Creating the filesystem

# mke2fs -j /dev/sdb1

We can use the mke2fs command to create a new ext3 filesystem. The “-j” flag enables journaling; you can continue without it, but then you get a plain (non-journaled) ext2 filesystem.
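
Equivalently, on most distributions the mkfs front end can be used, assuming ext3 support is installed:

# mkfs.ext3 /dev/sdb1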

4. Partition labelling

The newly created partition can be labelled using the e2label command:

# e2label /dev/sdb1 /newhome

5. Mounting the filesystem during startup

a. Create a directory under the root of the filesystem called “/newhome”.

b. Add the entry below to /etc/fstab:

LABEL=/newhome /newhome ext3 defaults 1 2

Save and exit.
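
c. (Optional) If you want to verify before rebooting, the fstab entry can be activated immediately and checked:

# mount -a
# df -h /newhome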

6. Reboot and enjoy. 🙂

Categories
Linux

How do I list connected USB devices in Linux?

In most flavors of Linux, we can use a utility called “lsusb” to get information on the various USB devices connected.

$ lsusb

The above command lists the USB devices known to the Linux kernel. See the screenshot attached below.

[Screenshot: lsusb output]

You can use,

$ lsusb -v

to get more detailed information on the devices connected to the system.
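
If you just want a quick overview of how the devices hang off each USB bus and which kernel driver is bound to them, the tree view is handy:

$ lsusb -t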

Categories
Linux

Install Subversion with Web Access on Ubuntu

To install subversion, open a terminal and run the following command:

sudo apt-get install subversion libapache2-svn

We’re going to create the subversion repository in /svn, although you should choose a location that has a good amount of space.

sudo svnadmin create /svn

Next we’ll need to edit the configuration file for the subversion webdav module. You can use a different editor if you’d like.

sudo gedit /etc/apache2/mods-enabled/dav_svn.conf

The Location element in the configuration file dictates the root path from which Subversion will be accessible, for instance: https://www.technix.in/svn

<Location /svn>

The DAV line needs to be uncommented to enable the dav module

# Uncomment this to enable the repository,
DAV svn

The SVNPath line should be set to the same place where you created the repository with the svnadmin command.

# Set this to the path to your repository
SVNPath /svn

The next section lets you turn on authentication. This is just basic authentication, so don’t consider it extremely secure. The password file will be located wherever the AuthUserFile setting points to… it’s probably best to leave it at the default.

# Uncomment the following 3 lines to enable Basic Authentication
AuthType Basic
AuthName "Subversion Repository"
AuthUserFile /etc/apache2/dav_svn.passwd

To create a user on the repository, use the following command:

sudo htpasswd2 -cm /etc/apache2/dav_svn.passwd <username>

Note that you should only use the -c option the FIRST time that you create a user. After that you will only want to use the -m option, which specifies MD5 hashing of the password but doesn’t recreate the file. (On many systems the utility is named htpasswd rather than htpasswd2; it accepts the same options.)

Example:

sudo htpasswd2 -cm /etc/apache2/dav_svn.passwd geek
New password:
Re-type new password:
Adding password for user geek

Restart apache by running the following command:

sudo /etc/init.d/apache2 restart

Now if you go to http://www.server.com/svn in your browser, you should see that the repository allows anonymous read access, but commit access will require a username.
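
At this point you can also start using the repository from any machine with a Subversion client. As a quick sketch (myproject is just a placeholder project name, and geek is the user created above):

svn import myproject http://www.server.com/svn/myproject -m "Initial import" --username geek
svn checkout http://www.server.com/svn/myproject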

If you want to force all users to authenticate even for read access, add the following line right below the AuthUserFile line from above. Restart apache after changing this line.

Require valid-user

Now if you refresh your browser, you’ll be prompted for your credentials:

[Screenshot: browser prompting for Subversion credentials]

Categories
Linux Redhat

Red Hat Enterprise Linux (RHEL 5) installation guide

RHEL 5 can be installed using a variety of methods, such as:

  • DVD/CD ROM
  • Hard drive
  • NFS
  • FTP
  • HTTP

Here we are going to discuss RHEL 5 installation using the first method (a RHEL 5 DVD).

To start, boot your server from the DVD drive. The installation program then probes your system and tries to identify the installation media. Once the DVD drive driver is loaded, you will be presented with an option to check the media in the DVD-ROM drive; this step can be skipped. From the media check dialog, continue to the next stage of the installation process.


Press “Enter” to continue installation in GUI mode.