Linux Journal October 2011
Message queues are a dead-simple idea, particularly if you're familiar with queues as data structures. Distributed message queues can be quite difficult to get to work in a distributed and persistent way, but Amazon has done just that and makes its queue available for a very reasonable price, often ending up free for small organizations and sites. The advantages that a distributed message queue can bring to the table are overwhelming though, particularly when you have tasks or pieces of data that are coming in too rapidly to handle, but which could be processed by a large number of back ends.
Indeed, that's what I'm doing on my current project, and it has been working like a charm. Now, there are issues with Amazon's queues. For starters, they have longer latency than you would get with a local queue, and they also are sitting on third-party servers, which might not sit well with some companies. But for the most part, it has worked without a hitch and has become a core part of the infrastructure on my project.
During the course of this work, I've started to find all sorts of uses for message queues, and I'm starting to incorporate them into other projects on which I work. The day may come when it's an exceptional project that doesn't use a message queue, rather than the other way around.

Reuven M. Lerner is a longtime Web developer, architect and trainer. He is a PhD candidate in learning sciences at Northwestern University, researching the design and analysis of collaborative on-line communities.
Reuven lives with his wife and three children in Modi'in.

This site, like all the other Amazon Web Services sites, has extensive documentation, tutorials and examples, as well as forums that let developers help one another.

He also explains the philosophy behind the project and finds that Twitter has some weirdnesses in its HTML that make parsing the results interesting.
The problem is, Twitter also has become even crazier and harder to understand as it has gained its millions of users and its utility ecosystem has expanded and contracted variously. One thing that's always interested me, though, is whether there's a way to calculate a numeric value for given Twitter users based on both their visibility and engagement. How do you measure those? Visibility could be calculated simply by looking at how many followers someone has, but most Twitter users follow lots of random people precisely so that those people will follow them back and pad the follower count.
This behavior is based on what Dr. Robert Cialdini calls the Principle of Reciprocity in his brilliant book Influence, wherein he observes that if someone does something for you, you feel an inherent obligation to return the favor. Think Hare Krishnas at the airport giving you a flower before they ask for a donation. Think of the self-appointed pundits and gurus telling you their rules of netiquette, or of your own reactions: "if this person's following me on Twitter, I should follow them back. It's only polite, after all." One way to differentiate these different types of Twitter users, therefore, is to calculate the ratio of followers to following.
That's half the calculation. Engagement is trickier to calculate, but if you examine someone's Twitter stream, you can separate out broadcast messages from those that are an @-reply, as in "@DaveTaylor nice column!" If the majority of tweets from someone are broadcast tweets, their level of engagement is low, whereas a Twitter user whose messages almost always are responses is high on the engagement scale. How many followers does someone have overall? How many tweets has the user sent?
An account with high engagement but only seven tweets in the last six months is less interesting than one with lower engagement but an average of 20 tweets a day. So, how do we calculate these sorts of figures?

Understanding a Twitter Profile Page

Twitter offers up quite a bit of information on its public profiles (and just about every Twitter profile is public), including the key stats we want to start with, and we can grab a profile page with curl from the command line. The "list" figure suggests popularity too, but since most Twitter users I know eschew lists, let's just ignore that for now.
We'd also like to grab the raw tweet count to see if it's an account that actually has sent some tweets or is dormant; meanwhile, that forces us to tweak our regular expression. The challenge really is just to strip away the HTML. My first attempt is a simple sed substitution, but it isn't global: we want "hello" as the result, because we don't want to lose the non-HTML values. To strip all the HTML, simply make it a global search and replace by appending a "g" to the sed statement. Now we can turn the mess of results into something hopefully a bit more useful, and the results are ready to be parsed. The wrinkle, however, is that when we drop this into a shell script, the results are a bit surprising if we look at my FilmBuzz movie news Twitter profile.
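The sed idea described above can be sketched as follows; these are illustrative expressions and an invented sample input, not the column's actual code:

```shell
# First attempt: a non-global substitution removes only the first tag on
# each line, so "hello" survives but later tags remain:
echo '<b>hello</b> <i>world</i>' | sed 's/<[^>]*>//'
# → hello</b> <i>world</i>

# Appending "g" makes the search and replace global, stripping every tag:
echo '<b>hello</b> <i>world</i>' | sed 's/<[^>]*>//g'
# → hello world
```

The same pattern, applied to a fetched profile page, leaves just the text between the tags ready for parsing.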
First, the script snippet: I run the same tstats script against DaveTaylor and look what happens: Let's stop here with this small dilemma. My next article will pick up the parsing challenge and then proceed to calculating some numeric scores for Twitter users.
It just takes your trusty command line and a few command-line tools to read Linux Journal. Even though Linux Journal no longer publishes on paper, you still can read it with Web browsers, PDF software, e-book readers and cell phones.
I don't have an e-book reader myself, but I think you could make the argument that the one true way to read Linux Journal is from the command line. After all, I read my e-mail, chat, check Twitter, do most of my day job and write my articles from the command line (okay, it's true I use gvim too; it frees up a terminal window), so why not read Linux Journal from the place where I spend most of my time? In many ways, I feel sorry for people stuck with proprietary operating systems.
When something goes wrong or if they have a problem to solve, the solution either is obvious, requires buying special software or is impossible.

The Problem

Recently, I ran into an interesting challenge when I had to decommission an old server. The server had quite a bit of sensitive data on it, so I also had to erase everything on the machine securely.
Finally, when I was done completely wiping away all traces of data, I had to power off the machine: a simple request when the server is under your desk.

The first program I use for this is the aptly named pdftotext. The most basic way to execute pdftotext is simply to point it at the PDF. The downside is that it doesn't know to strip out all the extraneous text, headers, pull-quotes and other text you will find in a magazine article, so the result is a bit limited, as you can see in Figure 1.
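The basic invocation takes just an input file and an optional output file; the filename here is a placeholder (pdftotext ships with poppler-utils on most distros):

```shell
# Convert the issue's PDF to plain text; without an output name,
# pdftotext writes a .txt file next to the PDF:
pdftotext lj-issue.pdf lj-issue.txt
less lj-issue.txt
```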
Text Plus Columns

So although I suppose pdftotext's default output is readable, it's less than ideal: as Figure 1 shows, the two columns of the original page come out interleaved line by line, shuffling together sentences from unrelated paragraphs.
Among its command-line options, pdftotext provides a -layout argument that attempts to preserve the original text layout. It's still not perfect, as you can see in Figure 2, but if you size your terminal so that it can fit a full page, it is rather readable.
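The layout-preserving run looks like this, again with a placeholder filename:

```shell
# -layout asks pdftotext to mimic the physical page layout,
# keeping the two magazine columns side by side:
pdftotext -layout lj-issue.pdf lj-issue.txt
```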
Text Plus Images

There is a bit of a problem, you'll find, if you do read Linux Journal in text-only mode: although some articles still are educational in pure text, with others, it really helps to see a diagram, screenshot or some other graphical representation of what the writer is saying. You aren't without options, but this next choice is a bit of a hack. Because there are versions of the w3m command-line Web browser that can display images in a terminal (the w3m-img package on a Debian-based system provides it), what you can do is convert the PDF to HTML and then view the HTML with w3m.
To do this, you use the pdftohtml program that came with the same package that provided pdftotext. This program creates a lot of files, so I recommend creating a new directory for your issue and cd-ing to it before you run the command. Here's an example of the steps to convert the September issue. Once the command completes, you can run w3m against the resulting lj HTML file. Now, by default, this output is much like the original output of pdftotext.
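Those steps might be sketched as follows; the directory and file names are placeholders, not the article's actual commands:

```shell
# pdftohtml writes many files, so work in a fresh directory:
mkdir lj-sep && cd lj-sep
pdftohtml ../lj-sep.pdf lj-sep        # plain conversion
# pdftohtml -c ../lj-sep.pdf lj-sep   # the -c variant tries to keep the layout
w3m lj-sep.html                       # a w3m-img build can show images inline
```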
There is no attempt to preserve formatting, so the output can be a bit of a mess to read. Also, as you can see in Figure 3, my headshot looks like a photo negative.

Text Plus Images Plus Columns

Although it's nice to see the images in a terminal, it would be better if everything was arranged so it made a bit more sense. Like with pdftotext, pdftohtml has an option that attempts to preserve the layout. In the case of pdftohtml, you add the -c option, with the result shown in Figure 4. As you scroll down the page, you still can read a good deal of the text, but it stands independent of the image. On the plus side, it no longer shows a negative headshot.

Go with the Reflow

So PDF conversion technically worked, but there definitely was room for improvement. As I thought about it, I realized that epub files work really well when it comes to reflowing text on a small screen.
I figured that might be a better source file for my command-line output. The tool I found best-suited to the job of converting epub files to text is Calibre. In my case, I just had to install a package of the same name, and I was provided with a suite of epub tools, including ebook-convert. Like with pdftotext, all you need to do is specify the input file and output file, and ebook-convert generates the output file in the format you want based on the file extension.
To create a basic text file, I would just hand ebook-convert the epub and a .txt output name. That said, I would say that so far, it was the most readable of the output, as you can see in Figure 5, apart from a few odd mid-word line breaks. So, with all of those different ways to read Linux Journal from the command line, two methods stand out to me right now.
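The conversion might be sketched like this, with a placeholder filename (ebook-convert is part of Calibre):

```shell
# The output format is inferred from the output file's extension:
ebook-convert lj-issue.epub lj-issue.txt
less lj-issue.txt
```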
If you don't need images, I think the epub-to-text conversion works the best, with the pdftotext that preserves layout coming in second. If you do need to see images though, it seems like your main choice either is to convert from PDF to HTML and then use w3m, or just use w3m to browse the Linux Journal archives directly.
Kyle Rankin is a Sr.

Fluendo says its products are utilized by most OEMs for devices like desktops, laptops, thin clients, tablets, digital signage, set-top boxes, connected TVs and media centers. Also, Fluendo solutions have been used by companies that have adopted Linux internally, allowing them to provide their employees with complete multimedia capabilities while remaining in full compliance with laws and patents.

Many corporations have had to learn the hard way that they can have the slickest security software money can buy, while the greatest threats lurk from within.
The main focus of the book, published by Apress, is to show how firms can remove these internal weaknesses by applying the concept of least privilege. The authors point out the implications of allowing users to run with unlimited administrator rights and discuss the implications when using Microsoft's Group Policy, UNIX and Linux servers, databases and other apps. Other topics include virtual environments, compliance and a cost-benefit analysis of least privilege. Auditors, geeks and suits will all find useful information in this book.
Book: A Futurist's Manifesto (O'Reilly Media)

Linux Journal's own complete migration from the Gutenberg realm to the digital one is a fine case study of the powerful trends in publishing that are explored in a novel new project titled Book: A Futurist's Manifesto. The core content of this O'Reilly project is a collection of essays from thought leaders and practitioners on the developments occurring in the wake of the digital publishing shake-up brought on by the Kindle, iPhone and their kindred devices. The essays explore the new tools that are rapidly transforming how content is created, managed and distributed; the critical role that metadata plays in making book content discoverable in an era of abundance; the publishing projects that are at the bleeding edge of this digital revolution; and how some digital books can evolve moment to moment, based on reader feedback.
This particular project will do just that, incorporating reader feedback as the book is produced in hybrid digital-print format and determining in what ways the project will develop. QueCloud's core technology is Queplix's flagship Virtual Data Manager architecture, which enables the configuration of a series of intelligent application software blades that identify and extract key data and associated security information from many different target applications. The blades identify and extract key metadata and associated security information from the data stored within these applications, then bring it into the Queplix Engine to support data integration with other applications.
Deduplication delivers effective storage that is many times the capacity of internal drives. The company also touts new features, such as RAID-6, faster network connectivity, fully integrated and optimized Arkeia Network Backup v9 software, integrated bare-metal disaster recovery and support for VMware vSphere virtual environments. Target customers for the appliances are mid-sized companies or remote offices. Key new features include new networking capabilities, such as "networking as a service" and unified authentication across all projects.
Diablo extends existing API support and accelerates networking and scalability features, allowing enterprises and service providers to deploy and manage OpenStack clouds with larger performance standards and with ease. Two new projects for the next OpenStack software release, "Essex", have been initiated, currently code-named Quantum and Keystone. The new eyeOS Professional Edition takes the original edition to a new level, enabling private clouds that are accessible via any browser or device.
All employees' and customers' workspaces can be virtualized in the cloud. The company says that the solution is highly scalable, easy to manage, does not require a large investment and includes all types of applications, including virtualized ones. Because eyeOS is developed in PHP and JavaScript and compiled in Hip-Hop, the system is not only high-performance, but also no software needs to be installed on the computer in order to work with it. Furthermore, the company says that eyeOS can integrate existing SaaS, virtualized legacy applications and other in-house apps served up as Web services.
The breakthrough feature in this release relates to the new program-analysis algorithms that identify data races and other serious concurrency defects. The process involves symbolic execution techniques to reason about many possible execution paths and interleavings simultaneously. The concurrency analysis can be applied to multithreaded software written for both single-core and multicore architectures.
Warnings can be generated automatically when metrics are outside an expected range.

First, it's starting with a hardware and software combo, something I've not done before. Given the mostly hardware-based information for the project, here are some carefully selected bits of information from the Web site: Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software.
It's intended for artists, designers, hobbyists and anyone interested in creating interactive objects or environments. Arduino can sense the environment by receiving input from a variety of sensors and can affect its surroundings by controlling lights, motors and other actuators. The microcontroller on the board is programmed using the Arduino programming language and the Arduino development environment. Arduino projects can be standalone, or they can communicate with software running on a computer. The environment is written in Java and based on Processing, avr-gcc and other open-source software.
Installation

For those who are happy with a binary, the Web site makes things very easy with 32- and 64-bit binary tarballs at the download page, and if you're lucky, the Arduino IDE may even be in your repository. Nevertheless, the site does recommend a series of packages that should help troubleshoot mishaps with both the source and binary tarballs, and it lists the packages you need. If you have a local repository version installed, chances are the program can be started with a single command. However, I must stop you here before actually running the program, and I apologize if I led you astray in the last few paragraphs (don't worry if you've already started it; you can close and re-open it with no worries).
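For reference, the tarball and repository launch methods might look like the following sketch; the filenames and paths are assumptions, not from the article (and, as the next paragraph notes, plug your board in first):

```shell
# From the binary tarball (archive name assumed for illustration):
tar xzf arduino.tgz
cd arduino-*/
./arduino

# From a distribution package, the IDE usually lands in your PATH:
arduino
```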
Obviously, before you can do anything with an Arduino board and the software, you first have to plug in your Arduino device. This will help in the configuration of your hardware, especially if you're using a USB connection. Once that's out of the way, you now can start the program with any of the methods above. Usage With the program running and the device plugged in, let's set it up. Inside the main window, click on the Tools menu and navigate your way to the Board menu.
From there, choose your Arduino device (I had the Arduino Uno). The IDE makes things simple with a series of examples in an easy-access menu. I recommend starting with Blink under the 1.Basics menu. If all goes well, you should see your device start blinking from an LED, perhaps with a board reset in the process. If your board has an enabled reset facility, like the Uno I was using, you should be able to make code changes by uploading them, watching the board next to you reset and start again with the new program.
In fact, I recommend you try it now. Change one of the lines, perhaps one of the lines dealing with the delay time, and then upload it again. Now this may seem lame, but to a hardware "nOOb" like myself, changing around the program and updating the running hardware in a visible way was quite a buzz! If you want to check out your code before uploading it, the start and stop buttons are for verifying the code, with the stop button obviously allowing you to cancel any compiling partway through.
Although I'm running out of space for the software side, I recommend checking out more of the examples in the code, where genuinely real-world uses are available. Some highlights include ChatServer, "a simple server that distributes any incoming messages to all connected clients"; a reader for barometric pressure sensors; and a program for demonstrating and controlling sprite animations. However, I've been neglecting one of Arduino's real bonuses, and that is the ability to use a board to program any number of chips, remove them from the main Arduino board, and use them to run external devices.
The nature of open hardware really makes this a robotic enthusiast's wet dream, with examples like my close mate Phil's robotic spider showing some of the cool things you can achieve with this suite. According to the Web site: This enables software lighting controllers to communicate with hardware either via Ethernet or traditional DMX networks. The Web site contains documentation on the APIs and there are code examples provided in the repo. Like the name suggests, this allows lighting devices to be remotely configured, and information like temperature, fan speeds, power consumption and so on to be fed back to the lighting controller.
With the addition of a temperature sensor, it can report this information back to OLA.

Installation

Before I continue, I must forewarn you of a deviation from the official documentation. The official docs recommend you use a program called QLC; however, I simply couldn't satisfy the library dependencies. Because of the library issues, I had to stray from the recommended applications in the Linux how-to and use some recommendations, amusingly, from a Windows how-to, one that consisted of using VMware to run Linux under Windows (perhaps one of the more bizarre troubleshooting methods I've employed).
And, may I also give a big thanks to the authors of this guide, which will be guiding most of the following process. Anyway, this guide still uses Linux in the end, and it gave a helpful command for installing the needed libraries. Depending on how your distro works, you may need to run ldconfig manually at this point (note, this requires root privileges to work properly). From here, let's download the source code with git and compile it. To save time and space, I combine these steps into one stream of commands.
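That combined stream might look something like the following sketch; the repository URL and the autotools steps are my assumptions about a typical build of this kind, not commands from the article:

```shell
# Fetch the OLA source, build it, install it and refresh the linker cache:
git clone https://github.com/OpenLightingProject/ola.git
cd ola
autoreconf -i && ./configure
make
sudo make install    # or su -c 'make install' on distros without sudo
sudo ldconfig        # refresh the dynamic linker cache
```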
From your console, it's the usual clone-configure-make-install dance, using sudo (or su) for the install step if your distro calls for it.

Usage

OLA uses a daemon that various programs then can interact with, the easiest of which is a localized Web interface. To get started, most people should be able to get away with starting the daemon with one fairly simple command. Again, going with the best-case scenario, and what most users will be getting away with, let's now look at the Web interface for controlling the Arduino RGB Mixer.
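Starting the daemon can be as simple as this (assuming the install above succeeded; olad is OLA's daemon):

```shell
# Start the OLA daemon in the foreground; it serves the Web interface
# on localhost (the port appears in olad's startup output):
olad
```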
In your favored Web browser, enter "localhost: To start interacting with your Arduino device, click the Add Universe button. Check its box, and assign a number and name of your choosing for the Universe Id and Universe Name fields, respectively. And presto, you now can play with your device. Click on the Console tab, and you'll see the vertical sliders.
These turn the brightness of individual LEDs up and down in real time. Again, perhaps it's a bit lame, but it gave me the same thrill of interaction I experienced with the Arduino examples. Although this obviously is a modest example of OLA's potential, if you look at the screenshots, there are impressive examples of using OLA on a much larger and dramatic scale. Hopefully, this is one of those projects that gains real development and maturity, becoming an underground hero. For a personal wish fulfillment, I'd like to see concert lighting become significantly cheaper, and if smaller independent bands can use this in particular to realize their artistic visions, OLA will make me a happy man.
John Knight is a year-old. He usually can be found playing a kick-drum far too much. Send e-mail to newprojects@linuxjournal.com.

This year holds a few surprises, a couple dominant players and as much open source as you can handle. We don't encourage gambling here at Linux Journal, but if you had an office pool going for pizza money, it's officially too late to make your wager. Debian: apt, apt and more apt this year in the distribution category. Although it's no surprise that Ubuntu remains king of the distros, it's nice to see Debian, the "father" of Ubuntu, gaining some ground.
Whether it's because Linux Mint is making Debian more user-friendly or because folks are drawn to the appeal of Debian's stability, it got just about half the votes of all the Ubuntu variants combined. Way to go, Debian! Oh, and of course, congratulations to the winner and still-champion, Ubuntu.
Android and Debian tie

Although Ubuntu is streamlining its versions and making the desktop screen function similarly to the Netbook screen, Ubuntu Netbook Remix still garnered the most votes this year. Will the push to Unity make next year's Readers' Choice look a little different? Our runner-up last year was Android, and this year, Android is still our runner-up, but it shares the silver medal with Debian.
Why is Debian getting so much attention this year? For the same reason soda-pop companies are releasing "throw-back" versions of their drinks with real sugar: sometimes the tried-and-true operating systems just taste a little sweeter. MeeGo takes enough of the remaining vote to get our runner-up spot, but it's a bitter prize, as MeeGo's future looks pretty bleak. Will Android get another open-source competitor? Will the lack of open competition stifle Android innovation? Only time will tell. For current Linux-based handsets, however, Google truly can say, "All your base are belong to us."
KDE

Last year it was a tie. Due to the timing of the GNOME 3 release, it's hard to tell if the victory is because of version 3 or in spite of it. The next-closest desktop environment is XFCE, with less than one-third the votes of either of the big two. With such big contenders for first and second, however, that third-place spot is significant, and XFCE is gaining ground.
We think it's one to keep an eye on next year. Best Web Browser Firefox Runner-up: The Firefox team stepped up its game, however, and this year made several major revisions. As more and more extensions are being ported to Google's browser, Firefox has some real competition. Hopefully, that competition will inspire greatness from both teams.
As users, we can only benefit! Best E-mail Client Thunderbird Runner-up: Gmail Web Client Like its foxy-browser sibling, Thunderbird takes top spot again this year in the e-mail category. Now that Canonical has adopted Thunderbird as its default e-mail client in Ubuntu, we see the popularity rising for our blue-birdie friend. Still hanging on tightly to second place is Gmail. Is Gmail an app? We're not sure, but it certainly does get votes as the best e-mail client.
Because Thunderbird can access Gmail servers easily, it's possible this category blurs the line a bit, as users simply can use both. When it comes to picking a favorite though, Thunderbird is the clear victor with more than twice as many votes as our runner-up. Although its video chat is hard to beat, we think Skype lost some points due to its purchase by Microsoft. What does that new ownership mean for the future of Skype? No one knows for sure, but it has Linux users scrambling to find alternative video chat clients "just in case".
It's nice to see Pidgin take first place again as favorite IRC app.
As my geek-fu has matured, so has my chatting preference, however, and I skipped right over our second-place IRC client X-Chat. Although my preferences seldom represent those of the masses, I'll be shaking my IRSSI pom-poms next year for the awesome underdog. Credit where credit is due, however; it's hard to beat the flexibility of Pidgin and the huge feature set of X-Chat. It's clear why they are the Readers' Choice victors. Our top two microblogging clients from last year retain their status as class favorites. The ever-popular AIR application TweetDeck is right on their heels, however, and it's throwing in its cross-platform flexibility to make the contest interesting.
Who will win next year? We'll be sure to tweet the answer when the time comes.

Best Microblogging Client Gwibber Runner-up:

Yes, technically the newcomer LibreOffice stomped on the former champion, OpenOffice.org. Because LibreOffice is a fork of OpenOffice.org, the good news for users is that LibreOffice has a large dev crew, and updates and feature enhancements are coming out at a really nice rate. The king is dead; long live the king! Basically, we're considering the winner "the program that saves to .ODT by default", and we think that covers it. In fact, AbiWord is what this article is being typed on as we speak. Will the LibreOffice takeover change the favorite app category in the future? Based on voting this year, we guess not. It's nice to see our underdog-favorite AbiWord continue to get votes though.
Picasa

The past few years have been an epic battle between these two programs. This year, we think the contestants might be a little tired, because although they still are clearly the top two choices, their popularity pretty much has leveled off. Whichever you choose, it's bound to be better than my solution.

Runner-up: Inkscape

GIMP kicks butt and takes names, as it scores two-thirds of the total votes this year.
Inkscape remains in second place, but it's a very distant second. There certainly are other options available, but time and time again, we turn to GIMP for editing those photos. And like Linux, the reward is great. Best Audio Tool Audacity Runner-up: Ardour is down another couple percentage points this year, but we still gave it runner-up status. We don't want Audacity to get too prideful after all.
Best Audio Player Amarok Runner-up: VLC It's still clear readers love Amarok. In a surprisingly close second place this year is VLC. And last year's runner-up Rhythmbox? Sadly, it's far in the distance behind these two front-runners. VLC plays just about any sort of video you throw at it. I think if you shove a paper flip book into your floppy drive, VLC will animate it on-screen for you. VLC takes such a huge margin this year, we almost didn't include MPlayer as a runner-up.
VLC is the favorite, without question. Chrome Bookmarks Our bookmark sync category completely rewrote history and gave us two brand-new winners. Firefox Sync, now built in to the browser, takes the victory handily with twice the votes of the runner-up. This split makes absolute sense, because Firefox beat Chrome in the browser war by the same margin.
In fact, if these numbers were different, it would cause our highly scientific voting process to look suspect. Feel free to call me Captain Obvious. Wikis We didn't title this "Most Popular Collaboration Tool", because as painful as it is, the majority of on-line collaboration tends to be e-mail messages with subjects like "RE: Final2" and ugly multi-fonted.
Although popular, that's definitely not ideal. Google Docs takes the spoils of war again this year with its ever-improving feature set for on-line collaboration. It's even possible to watch as someone else edits a document. For everything else, wikis are still popular. Easy to edit and easy to maintain, wikis are a godsend for living documents.
Ubuntu One When it comes to cloud storage, it's hard to beat Dropbox. Although security is an often-touted concern with the cloud- storage behemoth, ease of use trumps those concerns. Ubuntu One is a distant second to the cross-platform, simple-to-use Dropbox. I'd put my Dropbox referral code here to get some free space, but I suspect our editor would frown on such a thing, plus you'd all likely flog me.
Whether you're a kid of 5 or 50, it's hard not to smile when creating paintings with Bill's user-friendly application. GCompris is no slouch in the kid-friendly category either and quite nicely takes second place with its educational focus and lively graphics. Best Game World of Goo Runner-up: Battle for Wesnoth For the first time in the history of histories, Frozen Bubble is not the most popular game!
In fact, Frozen Bubble didn't even take second this year, as it lost to Battle for Wesnoth by half of a percentage point. Normally we'd consider that a tie, but Battle for Wesnoth deserves recognition for bumping off the Bubble. World of Goo is a game similar in addictiveness to Frozen Bubble, but with better graphics and more modern gameplay. If you're a casual gamer, check out World of Goo; it's really Goo-ood. Although I wouldn't argue a minotaur would be a wonderful monitoring solution for many circumstances, when it comes to computer hardware, Nagios is a little better, and far more popular.
OpenNMS is a newcomer to our victory circle, and although it's far behind Nagios, it still scored quite well. "Minotaur", as it were, got very few votes. Runner-up: PostgreSQL. It may not be the most exciting topic around, but databases make the world go round. MySQL, with its dolphin mascot, takes first place again this year, with more than twice as many votes as its closest competition, PostgreSQL.
Best Backup Solution rsync Runner-up: For two years running, we don't need no stinkin' GUI! VirtualBox is more than four times as popular as the distant runner-up, VMware. VirtualBox beat virtually every other option hands down. Runner-up: Subversion. The Linus-Torvalds-created Git remains in the number one spot this year, as it widens the gap a bit more from the runner-up, Subversion. Either will do the job, but Subversion is becoming more and more the underdog. Perhaps having Linus on your side is an advantage in the Open Source world!
Runner-up: OpenQRM. In another repeat performance from last year, Puppet takes top spot for configuration management. If you administer greater than zero servers, you will benefit from using a tool like Puppet. If you're managing fewer than zero servers, well, we're not sure what that means. You probably need Puppet to manage your counting skills. Whatever your reason, configuration management is a hot topic, and Puppet is the hottest.
It's quite obvious, however, that our readers don't suffer from ophidiophobia in the least. Hiss! Best Scripting Language Python Runner-up: Bash It hardly seems fair that Python gets both best programming and best scripting language, but I suppose excellence knows no bounds. A newcomer to our runner-up circle is Bash, the only language I can program with at all. Hats off to Python though, as it takes both categories again this year. Seeing vim in the copilot seat, however, was a nostalgic treat for me. Eclipse is incredibly extensible and remarkably quick in most environments. There is no denying it: we're geeks.
Adobe AIR gets an honorable mention, but only on principle. It was an entire order of magnitude less popular than HTML5. Best Package Management Application apt Runner-up: Synaptic It's no surprise that with Ubuntu and Debian in the top spots for distributions, apt would win handily for package management.
Synaptic is a far-off second place, with dozens of others taking up the rear. But our favorite response for this topic was "./configure; make; make install". Runner-up: Drupal. Our Webmistress, Katherine Druckman, is a die-hard Drupal fan, and for good reason: the entire Linux Journal site uses it and has for many years. Perhaps just to prove she didn't rig the voting, WordPress takes top spot again this year by a fairly narrow margin over Drupal. The great thing about open source is that it's hard to lose whichever route you take.
Because their single-digit "victories" seemed a bit strange to celebrate, we gave them runner-up status to all the other options you sent in.
Feel free to cry foul; it just seemed like the logical thing to do. Runner-up: ASUS. Dell still grabs the top spot here, similar to last year. In a second-place upset, however, ASUS grabs the silver medal, and Lenovo, last year's runner-up, didn't even make the chart. This category is becoming less and less important every year, only because Linux is working on more and more laptops out of the box. We think that's a great problem to have. Best Linux Desktop Workstation Vendor Dell Dell is on everyone's love-letter list this year and took the desktop workstation category by storm.
In fact, the competition was so lopsided, we can't even declare a runner-up. Dell gets all the penguin love. Runner-up: Dell. Not to be outdone, IBM pulls through with a very narrow victory over the ever-popular Dell in our server category. When it comes to server racks, our readers trust Big Blue over anyone else, but just barely.
In fact, it took twice as many votes as the number two favorite, Just for Fun. We mentioned Just for Fun in last year's Readers' Choice awards, and apparently many of you took the hint and bought it. The second spot goes to an equally awesome individual, Dave Taylor. I know I'm biased, but picking a favorite Linux Journal column is like picking a favorite flavor of ice cream—it's hard to go wrong! No one likes the proprietary drivers, but we're all thankful to have working software and accelerated video.
Runner-up: Samsung. This year, we tweaked the poll a little bit, and instead of looking for a specific model of smartphone, we asked for your favorite manufacturer. HTC edged out the competition, making it your favorite Linux smartphone manufacturer, at least for now. Runner-up: ASUS. In a newly created category, Samsung takes a clear and dominant victory over the other companies with its Galaxy line of tablets.
This field is still very young, so it's hard to say what next year will bring, but for this year, nothing can touch Samsung. Its unique design allows a tablet computer to become an Android-powered laptop simply by clicking the tablet into the keyboard accessory. This will be an exciting category in the future, as competition is really starting to heat up.
However you slice it, the Kindle wins this year. And if you need directions to the store in order to buy a Kindle? Because the fork is technically a new project, your write-in votes were counted, and LibreOffice wins the coveted spot as best new open-source project. GNOME 3 represents a drastic change in the way we compute on the desktop, and like its relative Unity, it has some people shaking their heads in frustration. I'll be playing World of Goo.
I hear it's rather good. And finally, a big thanks to everyone for participating in the voting. If you have ideas for new categories you'd like us to include for Readers' Choice, send e-mail to ljeditor@linuxjournal.com.
This article discusses the best practices for designing monitoring systems that will keep your services up and stay quiet when nothing is wrong. I've come to realize there is a single thing everyone can agree on: sooner or later, something will fail. Fortunately, system administrators plan for these things. Whether it's a redundant server in the data center or a second availability zone in EC2, the first and best way to ensure uptime is to decrease the number of single points of failure across the network. There are drawbacks to this approach though. Increasing a Web cluster from one to ten boxes decreases the chance of hardware failure taking down the entire site by a factor of ten.
Although this increases redundancy, it also dramatically increases the expense and complexity of the network. Instead of running a single server, there's now a series of boxes with a shared data store and load balancers. This complexity comes with drawbacks. It's ten times as likely that hardware failure will occur and a system administrator will wake up, and that only counts the actual Web servers. Whether you're in a data center or in the cloud, this kind of layering of services significantly increases the chances that a single device will go down and alert in the middle of the night.
Waking up in the middle of the night to fix a server or piece of software is bad for productivity and bad for morale. You can take two steps to help make sure this doesn't happen. The first is to implement the necessary amount of redundancy without increasing the complexity of the system past what is required for it to run.
The second step is to implement a monitoring system that will allow you to monitor exactly what you want as opposed to worrying about which individual box is using how much RAM. The End of the World methodology is a thought experiment designed to help choose the level of redundancy and complexity required for the application. It helps determine acceptable scenarios for downtime. Often when you ask people when it's acceptable for their sites to be down, they'll say that it never is, but that's not exactly true.
If an asteroid strikes Earth and destroys most of the human race, is it necessary for the site to stay up? That kind of uptime requires massive infrastructure placed in strategic locations around the globe and the kind of capital investments and staffing to which only large governments usually have access. Backing off step by step from this kind of over-the-top disaster, you can find where the acceptable level is. What if the disaster is localized to just the continent?
Is it acceptable to be down at this time? If the site is focused on those customers, it may be. If the site is an international tool, such as Amazon or Google, possibly not. What if it's local to the data center or availability zone where your boxes are kept? Most shops would like to stay up even if a backhoe cuts the power to their data center.
When the problem is framed this way, it becomes obvious that there is an acceptable level of downtime. Build to your specification. Finding the outer bounds of these requirements will uncover the requirements for monitoring the service as a whole.
Notice that this is a service and not a server. Although it's easy to monitor whether a network interface is available, it's far more interesting to monitor the health of an entire cluster. If the entire Web service goes down, that's something that needs to be acted upon immediately. A monitoring system is basically a scheduler and data collection tool that executes checks against a service and reports the results back to be presented on a common dashboard. It seems like one of those innocuous pieces of software that just runs in the background, like network graphs or log analysis, but it has a hidden ability to hurt an entire engineering department.
False positives can wake people up in the middle of the night and cause ongoing dread of going on pager duty. This results in people putting things in maintenance mode to quiet the false positives, which can end in an unnoticed failure of services. Choosing what to monitor is far more important than choosing how to monitor it. Many administrators like to alert on CPU and RAM use, feeling that sometimes spikes are precursors to crashes, so alerting on them seems reasonable. The problem here is that many things can cause the computer to use CPU and RAM, and most of them are within the normal bounds of an operating system.
When the system administrator checks on the box, the resource is in use, but the application is functioning without a problem. Unless there is a clear, documented link between RAM over a certain level and a crashing service, skipping alerts for this kind of resource use leads to far fewer false positives. Monitors should be tied to a defined good or bad value with respect to a particular production service.
Another path that leads to a large number of false positives is using percentages on differently equipped boxes. Alerting when only 10% of a disk remains may be reasonable on a 60G drive: on sites with heavy traffic or sites with a lot of instrumentation in the code, that last 6G can go pretty quickly. Applying the same monitor to a Web server with a 2TB disk, where 10% free still leaves 200G, seems like less of an emergency. Absolute values make better triggers. If the average disk use for a day of work for a particular box is 5G, monitoring for 15G left and only allowing alerts for it during business hours will give three days' notice.
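A check built on absolute free space, in the spirit of the above, might look like this sketch; the mount point and thresholds are per-box assumptions to be tuned against the box's daily growth:

```shell
#!/bin/sh
# check_disk_abs: Nagios-style disk check keyed to absolute free
# space rather than percentages. Thresholds are assumptions -- e.g.
# warn at 15G to get ~3 days of headroom at 5G/day of growth.
check_disk_abs() {
    mount_point=$1 warn_gb=$2 crit_gb=$3
    # Available space in whole gigabytes on the mount point
    free_gb=$(df -P -BG "$mount_point" | awk 'NR==2 {sub(/G/, "", $4); print $4}')
    if [ "${free_gb:-0}" -le "$crit_gb" ]; then
        echo "DISK CRITICAL - ${free_gb}G free on $mount_point"
        return 2
    elif [ "${free_gb:-0}" -le "$warn_gb" ]; then
        echo "DISK WARNING - ${free_gb}G free on $mount_point"
        return 1
    fi
    echo "DISK OK - ${free_gb}G free on $mount_point"
    return 0
}

# Example: warn at 15G free, go critical at 5G. "|| :" keeps the
# demonstration from aborting a calling script on a low-disk box.
check_disk_abs / 15 5 || :
```

Paired with an alert window restricted to business hours, a check like this pages days before the disk actually fills instead of at 3am when a percentage threshold trips.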
Alerts this far ahead of time let the system administrator plan downtime for the system if it is required, so that the server can be maintained without taking the supported service down. The two most popular open-source monitoring systems are Zenoss and Nagios. Both offer similar monitoring capabilities. Nagios provides a larger community and a lighter install than Zenoss, which allows administrators to use their own graphing solutions without duplicating software.
The best part is that they share a common format for monitoring scripts, the processes that do the actual checking of services. Although both systems come with basic templates for monitoring HTTP ports and other popular services, much of the power of these systems comes from the ability to write custom scripts. This is a great way to check not only that a Web server is up, but also that the application itself is working. A custom script can, for example, fetch a status page from a Hudson build server and check the result for success. The return value and exit code are how the monitoring script replies to the monitoring system.
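A minimal Nagios-style plugin along these lines might look like the following sketch; the Hudson URL and the "SUCCESS" marker are assumptions, so substitute whatever your build or health page actually returns:

```shell
#!/bin/sh
# check_hudson: Nagios-style plugin sketch. Plugins reply with an
# exit code (0 OK, 1 WARNING, 2 CRITICAL) plus a one-line status
# string that the monitoring system records and displays.

# Map a fetched page body to plugin output; split out so the logic
# is easy to test without a live server.
classify_build() {
    case "$1" in
        *SUCCESS*) echo "HUDSON OK - last build succeeded"; return 0 ;;
        *)         echo "HUDSON CRITICAL - last build did not succeed"; return 2 ;;
    esac
}

# When given a URL, fetch it and classify the result.
if [ "${1:-}" != "" ]; then
    if body=$(curl -sf --max-time 10 "$1"); then
        classify_build "$body"
    else
        echo "HUDSON CRITICAL - could not fetch $1"
        exit 2
    fi
fi
```

Run as `check_hudson http://hudson.example.com/job/myapp/lastBuild/api/xml` (a hypothetical URL); the monitoring system reads both the exit code and the status line.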
On Zenoss, this is also used in deduplication. On success, the monitoring script has an exit code of 0, with a string returned in a special form for the system to process. Using this structure, system administrators can work with developers to build custom URLs that the monitoring system can access to determine the health of the application without worrying about every system in the set. It may seem hard to swallow that it's acceptable to leave a box down overnight.
It may be the first in a cascading series of failures that cause multiple servers to go down, eventually resulting in a downed service, but this can be addressed directly from the load balancer or front-end appliance instead of indirectly looking at the boxes themselves. Using this method, the alert can be set to go off after a certain number of boxes fail at certain times of day, and there is no need to solve harder problems, such as requiring each box to know the state of the entire cluster.
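A sketch of that kind of cluster-level, time-gated alert follows; the thresholds and business hours are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: alert on the cluster, not the box. Only page when the
# number of failed back ends (as reported by the load balancer)
# crosses a threshold, and tolerate more failures overnight.
# Thresholds and hours are illustrative assumptions.
cluster_alert() {
    failed=$1   # back ends the load balancer reports as down
    hour=$2     # current hour, 0-23
    if [ "$hour" -ge 9 ] && [ "$hour" -lt 18 ]; then
        threshold=2   # business hours: page early
    else
        threshold=5   # overnight: let half a 10-box cluster fail
    fi
    if [ "$failed" -ge "$threshold" ]; then
        echo "CLUSTER CRITICAL - $failed back ends down"
        return 2
    fi
    echo "CLUSTER OK - $failed back ends down"
    return 0
}

cluster_alert 1 3   # one box down at 3am: logged, nobody is paged
```

The dead box still shows up on the morning dashboard; it just doesn't wake anyone when the service as a whole is healthy.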
So far, the design for the systems has been fairly agnostic as far as geographies and cloud footprint. For most applications, this doesn't make a lot of difference. Usually, with multiple geographies, each data center has its own instance of the monitoring system, with each one monitoring its siblings in the other locations. Operating in the cloud offers greater flexibility. Although it still is necessary to monitor the monitoring system, this can be done easily using Amazon's own (great, but far less configurable) monitoring system to watch the Nagios or Zenoss EC2 instances.
What really stands out about Amazon's cloud is that it's elastic. Hooking up the EC2 command-line programs to the monitoring service will allow new boxes to be launched if some are experiencing problems due to resource starvation, load or programs crashing on the box. Of course, this needs to be kept in check, or the number of instances could spiral out of control, but within reasonable bounds, launching new instances in place of crashing or overloaded ones from inside of a monitoring script is relatively easy.
One such script can monitor the load of a Hadoop cluster and add more boxes as the number of running jobs increases. The big difference here is that you're not just monitoring a problem and passing it off to a system administrator to act on it. The script acts as an orchestrator, attempting to fix the problem it sees. Although care should be taken to place proper bounds on the way this works, so that the computer cannot run amuck on the network, this kind of intelligent scheduler can be a powerful tool in automating tasks.
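A minimal sketch of such an orchestrator, assuming the classic ec2-* command-line tools and placeholder values for the AMI, threshold and ceiling:

```shell
#!/bin/sh
# Orchestrator sketch: when too many Hadoop jobs are running, launch
# an extra worker instance -- but never beyond a hard ceiling, so
# the script cannot run amuck on the network. The AMI ID, threshold
# and ceiling are placeholder assumptions.
AMI="ami-00000000"
THRESHOLD=10     # running jobs that trigger a scale-up
MAX_NODES=20     # hard ceiling on cluster size

# Pure decision logic, kept separate so it is easy to reason about.
decide() {
    jobs=$1 nodes=$2
    if [ "$jobs" -gt "$THRESHOLD" ] && [ "$nodes" -lt "$MAX_NODES" ]; then
        echo scale-up
    else
        echo hold
    fi
}

# Invoked with "run", gather live numbers and act on the decision.
if [ "${1:-}" = "run" ]; then
    jobs=$(hadoop job -list 2>/dev/null | awk 'NR==1 {print $1+0}')
    nodes=$(ec2-describe-instances 2>/dev/null | grep -c running)
    if [ "$(decide "${jobs:-0}" "$nodes")" = "scale-up" ]; then
        echo "HADOOP WARNING - $jobs jobs running, launching a node"
        ec2-run-instances "$AMI" -t m1.large >/dev/null
        exit 1
    fi
    echo "HADOOP OK - ${jobs:-0} jobs on $nodes nodes"
fi
```

Keeping the decision function separate from the side effects is what makes the "proper bounds" easy to audit: the ceiling check lives in one place.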
Although the idea of setting up a new monitoring system from scratch with great alerting rules and intelligent orchestration is a great idea, it's often just not possible. Most organizations have a monitoring system in place already, and often it's full of old alerts and boxes that have been placed in maintenance mode because they're more noisy than broken. If this is the case, it's time to cut out the cruft. Delete all the current alerts and take everything out of maintenance mode that isn't actually undergoing maintenance. Take the top ten noisy and badly behaved devices, and either stop monitoring the items that are provoking false positives or rewrite the scripts so they provide more meaningful data.
When these first ten are under control, move to the next group. It may take a few iterations over a few days, but in the end, you'll care more about the messages coming from what could be a very powerful tool for you. Monitoring systems often are overlooked as a required annoyance, but with a little bit of effort, they can be made to work for you. Monitoring for services, looking at clustered applications and alerting only on actual errors that can be handled provide real metrics to use for capacity planning and let system administrators sleep through the night so that they can be more proactive from day to day.
Michael Nugent has spent a good deal of his time designing large-scale solutions to fit into a tiny budget and leveraging Linux to fulfill the roles that typically would be filled by large commercial appliances. Michael has been working to design map-reduce clusters and elastic cloud systems for growing startups in the Silicon Valley area. When not building systems, he likes sailing, cooking and making things out of other things. Michael can be reached at michael@michaelnugent.com.
I don't like using GUI (aka graphical) tools with my databases. This is likely because when I first learned, it was with command-line tools, but even so, I think command-line database tools often are the best way to interact with a database manually. MariaDB and PostgreSQL are client-server databases: clients connect to the server, and although client and server often are installed together and you may think of them as a single entity, they actually are not.
The MariaDB server is called mysqld, and it always is running while the server is up. Likewise, the PostgreSQL server is called postgres. SQLite is different: there is no database server. There is just the database you are using, which is a local file, and client programs, which can interact with it. In the mysql client, commands end with ; or \g. All three are free software. I chose SQLite because it is arguably the most popular database in the world. You probably have several SQLite databases on your local computer right now. The command-line client is nice too.
Most distributions have packages for them, and in the case of MariaDB, there are packages for Debian, Ubuntu and Red Hat, plus a generic Linux binary, available from its download page. See the documentation for each and your distribution's documentation for instructions. On Ubuntu, you can install all three clients with a single apt command; you need to have added the appropriate MariaDB Ubuntu repository for that to work. Instructions are on the MariaDB downloads page. I've listed several useful commands for each client in Table 1.
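The Ubuntu installation mentioned above typically looks something like this; the package names are assumptions for Ubuntu-era releases and may vary:

```shell
# Install the MariaDB, PostgreSQL and SQLite command-line clients.
# Package names may differ slightly between releases.
sudo apt-get install mariadb-client postgresql-client sqlite3
```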
The first entry shows the basic command used to connect to a database; however, each client has several options. You will need these often, so refer to the man pages for the clients for what they are and how to use them. Some of the commands listed in Table 1 have extended options; refer to the documentation for details. The first time you connect to a newly installed MariaDB or PostgreSQL database, you need to connect as the database superuser because you likely have not set up any other users.
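On a typical fresh install, connecting as the superuser looks something like the following; exact flags depend on how the packages set things up, so treat these as common defaults rather than gospel:

```shell
# MariaDB: connect as the root database user (prompts for a password)
mysql -u root -p

# PostgreSQL: the postgres superuser usually authenticates as the
# matching system user, hence the sudo
sudo -u postgres psql
```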
To get started, connect to a freshly installed MariaDB with the mysql client as the root database user, and to a freshly installed PostgreSQL with the psql client as the postgres user. If you just dropped the library database, create it again. You'll need it later to follow along with the examples in this article. In SQLite3, there is no database server, and databases are just regular files, often with an extension such as .db. To create a database, name it on the command line when you launch the client; if it doesn't exist, the client will create it. Managing Users and Permissions: there isn't space to go into the details of how to create and manage the permissions of database users here.
I will continue to use the default superuser accounts for the examples here. There is no internal database user or user permissions management with SQLite3. If local users have write access to the database file, they can do anything they want. So let's look at some of the basic SQL-related similarities and differences between the three. The most common SQL statements are selects, inserts, updates and deletes.
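To see those four statements in action without any server at all, you can drive them through the sqlite3 client against a throwaway file; the library database and books table are illustrative names:

```shell
# Create a throwaway SQLite database file and run the four most
# common SQL statements against it.
db=/tmp/library.db
rm -f "$db"
sqlite3 "$db" "CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT);"
sqlite3 "$db" "INSERT INTO books (title) VALUES ('Just for Fun');"        # insert
sqlite3 "$db" "SELECT title FROM books;"                                  # prints: Just for Fun
sqlite3 "$db" "UPDATE books SET title = 'Just for Fun, 2e' WHERE id = 1;" # update
sqlite3 "$db" "DELETE FROM books WHERE id = 1;"                           # delete
rm -f "$db"
```

The same statements work nearly verbatim in the mysql and psql clients once you are connected to a database there.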