Tuesday, May 06, 2014

Cara Sperry's invitation is awaiting your response

Cara Sperry would like to connect on LinkedIn. How would you like to respond?
Cara Sperry
Job Developer Computer Technologies Program
Confirm you know Cara
You are receiving Reminder emails for pending invitations. Unsubscribe
© 2014, LinkedIn Corporation. 2029 Stierlin Ct. Mountain View, CA 94043, USA

Tuesday, April 29, 2014

Cara Sperry's invitation is awaiting your response

Cara Sperry would like to connect on LinkedIn. How would you like to respond?
Cara Sperry
Job Developer Computer Technologies Program
Confirm you know Cara
You are receiving Reminder emails for pending invitations. Unsubscribe
© 2014, LinkedIn Corporation. 2029 Stierlin Ct. Mountain View, CA 94043, USA

Thursday, April 24, 2014

Invitation to connect on LinkedIn

From Cara Sperry
Student Services Coordinator, Instructor, Job Developer at Computer Technologies Program
San Francisco Bay Area

I'd like to add you to my professional network on LinkedIn.

- Cara

You are receiving Invitation to Connect emails. Unsubscribe
© 2014, LinkedIn Corporation. 2029 Stierlin Ct. Mountain View, CA 94043, USA

Wednesday, January 11, 2006

How Multiple Server Hosting impacts your website's uptime

by: Godfrey E. Heron

This article describes the technology behind multiple server hosting and how you can use it to maximize your site's security and uptime.

Hosting of web sites has essentially become a commodity. There is very little distinguishing one hosting company from the next. Core plans and features are the same, and price is no longer a true differentiator. In fact, choosing a host based on the cheapest price can be more expensive in the long term, given reliability issues and the possible loss of sales caused by website downtime. Selecting a host from the thousands of providers and resellers can be a very daunting task, and it often becomes a hit-and-miss affair.

But although hosting may have become a commodity, one distinguishing feature you must always look for is reliability. At the heart of any hosting company's reliability is redundancy. This ensures that if a problem occurs at one point, an alternative takes over as seamlessly and transparently as possible.

Most hosts do employ redundant network connections: the high-speed pipes that route data from the server to your web browser. Redundant 'multiple web servers', however, have been extremely rare and very expensive, requiring costly routing equipment previously used only in the mission-critical applications of Fortune 500 companies.

However, a very neat but little-known Domain Name Server (DNS) feature called 'round robin' allows a particular IP address to be selected from a 'pool' of addresses when a DNS request arrives. To understand what this has to do with server reliability, remember that the DNS database maps a host name to its IP address. So instead of using a hard-to-remember series of numbers (an IP address), we just type www.yourdomain.com into the browser to get to your website.

Now, typically it takes at least 2 to 3 days to propagate, or 'spread the word' of, your DNS information throughout the internet. That's why when you register or transfer a domain name it isn't immediately available to the person browsing the web. This delay has stymied the security benefits of hosting your site on multiple servers: if something went awry with one server, your site would be down for a couple of days while you changed your DNS to point at the second server and waited for the change to be picked up by routers across the internet.

The round robin DNS strategy solves this predicament by mapping your domain name to more than one IP address. Select hosting companies now employ the DNS round robin technique in conjunction with 'failover monitoring'. The process starts with the web hosting company setting up your site on two or more independent web servers (preferably with different IP blocks assigned to them). Your domain name will therefore have two or more IP addresses assigned to it.
Then the failover monitor watches your web servers by requesting a URL you specify and looking for particular text in the response. When the system detects that one of your IP addresses is returning an error while the others aren't, it pulls that IP address out of the list, and the DNS then points your domain name only at the working addresses. If any of your IPs come back online they are restored to the pool. This effectively and safely keeps your site online even if one of your web servers is down.

The average failure detection and recovery time with a system like this can be as low as 15 minutes. The exact time varies with the speed of your site, the nature of the failure, and how long other ISPs cache (save) your DNS information. That caching time can be influenced in the failover monitor by lowering the "time to live" (TTL) settings, which other ISPs use to determine how long to cache your DNS records.

Of course, you must also bear in mind how frequently data is synchronized between your website's servers. This will be the hosting company's responsibility, and it may become complicated where databases and user sessions are involved.

The very expensive hardware-based failover systems that present a single virtual IP address to other ISPs, while juggling a number of unique IP addresses on different servers behind the scenes, are of course the most 'elegant' solution to multi-server hosting: the whole issue of ISPs caching your information never comes into play. Still, for sites that need true 99.99995% uptime without huge outlays of money, the technology is readily available, and certain proprietary failover-monitoring systems are now relatively cheap to deploy.
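The pool management described above can be sketched in a few lines of Python. This is an illustrative sketch only: the IP addresses are placeholder examples from the documentation ranges, and a real failover monitor would fetch your URL and check the response text rather than being told which address failed.

```python
from itertools import cycle

# Hypothetical pool of A records answering for www.yourdomain.com
dns_pool = ["192.0.2.10", "198.51.100.20", "203.0.113.30"]

def round_robin(pool):
    """Successive DNS lookups rotate through the pool of addresses."""
    return cycle(pool)

def remove_failed(pool, failed_ip):
    """Failover step: pull an erroring IP address out of the pool
    so the DNS only hands out working addresses."""
    return [ip for ip in pool if ip != failed_ip]

lookups = round_robin(dns_pool)
answers = [next(lookups) for _ in range(4)]   # wraps back to the first IP
working = remove_failed(dns_pool, "198.51.100.20")
```

The fourth lookup wraps around to the first address again, which is the whole trick: load and risk are spread across the pool, and a failed server simply drops out of the rotation.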

About The Author: Godfrey Heron is the Website Manager of the Irieisle Multiple Domain Hosting Services company. Sign up for your free trial and host multiple web sites on one account: http://www.irieisle-online.com

Tuesday, January 10, 2006

Buffer Underrun and Overrun Scenarios

By Stephen Bucaro

Buffer underrun and buffer overrun are occurrences that
can result in some very frustrating errors. This is not a
"how-to" article about fixing buffer underrun and buffer
overrun errors, but a basic description of what a buffer
is, why we need buffers, and what causes buffer underrun
and buffer overrun.

Buffer Underrun

The most common occurrence of buffer underrun is CD
recorders. Let's imagine an example of a CD recording
session. The computer has an ATA hard drive capable of
transferring data at a rate of 8 MBps (megabytes per
second). The CD recorder has a recording rate of 8 MBps.
Everything should work fine, right?

Note: The data transfer rates mentioned in this article do
not apply to any specific device. They're just for purposes
of discussion.

The 8 MBps specification for the hard drive is for "burst"
mode. In other words, it can transfer data at a rate of
8 MBps for only a few seconds. Then the transfer rate drops
much lower, and if the hard drive hasn't been maintained,
for example it has not been defragmented recently, the
transfer rate can drop even lower.

Whereas a hard drive can skip from cluster to cluster
while reading and writing, a CD recorder must burn the data
track in a continuous stream without stopping. The design
of a CD recorder requires a "sustained" transfer rate.

When two devices that operate at different transfer rates
must communicate, we can make them work together by placing
a buffer between them. A buffer is a block of memory, like
a bucket for bytes. When you start the CD recording session,
the hard drive begins filling the buffer. When the buffer
is almost full, the CD recorder begins drawing bytes out of
the buffer.

If everything goes smoothly, the hard drive will be able
to keep enough bytes in the buffer so that the speedy CD
recorder won't empty the buffer. If the buffer runs dry,
the CD recorder has no data to burn into the CD, so it
stops. Buffer underrun error.

We can reduce the chances of buffer underrun by configuring
a larger buffer. Then the hard drive will be able to put
more bytes in the bucket before the CD recorder starts
drawing them out. However, sometimes you can't increase the
size of the buffer because the computer doesn't have a
large amount of RAM installed. When the computer needs more
RAM, it uses "virtual" RAM. That is, it allocates part of
the hard disk and pretends that it's RAM. Now, even though
you've increased the size of the buffer, you have caused
the hard drive to work even slower.
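The bucket analogy can be made concrete with a small simulation. This is a sketch for illustration only; the per-tick byte counts are invented, not real device rates:

```python
def simulate_burn(disk_rates, burn_rate, buffer_size):
    """Tick-by-tick sketch of a CD burn.
    disk_rates: units the hard drive delivers each tick (bursty).
    burn_rate: units the recorder must pull every tick (sustained).
    Returns ("ok", None) if the burn completes, or ("underrun", tick)
    the moment the buffer can't supply a full tick's worth of data."""
    buffered = 0
    for tick, produced in enumerate(disk_rates):
        buffered = min(buffered + produced, buffer_size)  # fill, capped
        if buffered < burn_rate:       # buffer ran dry: underrun error
            return ("underrun", tick)
        buffered -= burn_rate          # recorder drains its share
    return ("ok", None)

ok = simulate_burn([16, 0, 16, 0], burn_rate=8, buffer_size=16)
bad = simulate_burn([16, 0, 16, 0], burn_rate=8, buffer_size=8)
```

With a 16-unit buffer the bursty drive stays ahead of the recorder; shrink the buffer to 8 units and the same burn underruns on the second tick, which is exactly why a larger buffer helps.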

Buffer Overrun

The most common occurrence of buffer overrun is video
recorders. Let's imagine an example of a video camera
connected to a computer. The video camera records at a data
rate of 168 MBps. The computer monitor is capable of
displaying data at a rate of only 60 MBps. We have a big
problem, right?

Thanks to MPEG compression, we might not have as big a
problem as first appears. With MPEG compression, the video
camera does not have to send the entire image for every
frame. It sends only the data for the part of the image
that changed, and it compresses that part.

If the image doesn't change much, and the part that changed
compresses well, the video camera might need to transfer at
a rate of only a few MBps. But if the entire image changes
every frame and the image does not compress well, the video
camera might transfer data at a higher rate than the
computer monitor is capable of displaying.

Again, we have two devices that operate at different
transfer rates that must communicate. We can make them work
together by placing a buffer between them. When you start
recording video, the video recorder starts filling the
buffer. The computer display immediately begins pulling
data out of the buffer to compose display frames.

If everything goes smoothly, the computer display will be
pulling data out of the buffer fast enough so that the
buffer never completely fills. If the buffer fills up, the
video camera can't put any more data in, so it stops.
Buffer overrun error.

We can reduce the chances of buffer overrun by defining a
larger buffer. Then the video camera will be able to put
more bytes in the bucket before it fills up. Hopefully,
the video camera will run into a few frames where the
entire image doesn't change, reducing its data transfer
rate enough so the computer display can catch up.
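The overrun case can be sketched the same way as a tick-by-tick simulation. Again, the per-tick byte counts are invented for illustration:

```python
def simulate_capture(camera_rates, display_rate, buffer_size):
    """Tick-by-tick sketch of video capture.
    camera_rates: units the camera pushes each tick (spiky under MPEG).
    display_rate: units the display can pull every tick.
    Returns ("ok", None), or ("overrun", tick) the moment the buffer
    has no room for the camera's next chunk."""
    buffered = 0
    for tick, produced in enumerate(camera_rates):
        if buffered + produced > buffer_size:   # no room left: overrun
            return ("overrun", tick)
        buffered += produced
        buffered = max(buffered - display_rate, 0)  # display drains
    return ("ok", None)

small = simulate_capture([4, 4, 12, 12], display_rate=6, buffer_size=8)
large = simulate_capture([4, 4, 12, 12], display_rate=6, buffer_size=24)
```

The quiet frames compress well and fit easily, but the spiky frames overflow the 8-unit buffer; tripling the buffer absorbs the spike and gives the display time to catch up.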

Underrun, Overrun Protection

Today, CD recorder buffer underrun is much less common.
Computers come with much more RAM than they did before,
and CD recorders have learned to monitor the buffer and
reduce the recording speed if the buffer starts to run low.
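That buffer-watching behaviour can be sketched with a small simulation. The rates and the quarter-full threshold here are invented for illustration; real recorders use firmware-specific logic:

```python
def adaptive_burn(disk_rates, max_rate, buffer_size):
    """Underrun protection: the recorder halves its speed whenever the
    buffer drops below a quarter full, rather than failing outright.
    Returns the final (possibly reduced) recording speed."""
    buffered, rate = 0, max_rate
    for produced in disk_rates:
        buffered = min(buffered + produced, buffer_size)
        if buffered < buffer_size // 4:
            rate = max(rate // 2, 1)   # slow down instead of stopping
        take = min(rate, buffered)     # burn whatever is available
        buffered -= take
    return rate

steady = adaptive_burn([8, 8, 8, 8], max_rate=8, buffer_size=16)
starved = adaptive_burn([16, 0, 0, 0], max_rate=8, buffer_size=16)
```

A steady data supply lets the recorder hold full speed, while a starved buffer forces it to slow down; the burn finishes either way, just more slowly, which is the trade-off described below.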

Video camera buffer overrun is also less common. Video uses
a program called a "codec" (for encode/decode). A smart
codec can monitor the buffer and reconfigure itself when
the buffer gets too full. It might for example automatically
reduce the color depth of the video, or drop frames, until
the computer display catches up.

Underrun and overrun protection doesn't completely solve
the problem. If underrun protection activates, a CD
recording session will take much longer. If overrun
protection activates, the video quality will be reduced.
The only way to solve underrun and overrun problems, after
increasing the size of the buffer, is to match the data
transfer rates of the devices that need to communicate.
You can upgrade to a faster hard drive, or install a
high-performance video card.

Now, if you need to troubleshoot a buffer underrun or
buffer overrun error, at least you know what a buffer is,
why we need buffers, and what causes buffer underrun and
buffer overrun errors.

Resource Box:
Copyright (C) 2004 Bucaro TecHelp. To learn how to maintain
your computer and use it more effectively to design a Web
site and make money on the Web, visit bucarotechelp.com.
To subscribe to the Bucaro TecHelp Newsletter, send a blank
email to subscribe@bucarotechelp.com

Wednesday, June 15, 2005

Looking Ahead

I've made a series of changes to the IT infrastructure recently. We have acquired some new server hardware and I'm converting from Debian to CentOS 4 (Redhat EL4).

My goal is to build a two-server HA cluster using DRBD (to keep data in sync) and Heartbeat on CentOS. This improves the server situation in three ways.

1) Real time data backup to a hot spare.
2) Automatic failover to the hot spare.
3) CentOS 4 is based on Redhat EL 4, so future support is easier.

So far I have the new hardware assembled. CentOS is installed on one machine and being installed on the second as of now. I also have DRBD compiled on the primary machine. As this work is being done I am documenting everything.

I am used to being the solo IT Department, but I realize that making plans for future support is good. Maybe someday I can take a vacation even without worrying too much.

The openMosix stuff was fun to play with, but it's really unnecessary in our situation. The CPU demands on our server are extremely low. There is no problem using a P3-500 to do the job. We have even run on a P2-300 during an emergency.

Eventually I would like to be able to configure a cluster with n nodes that can be seamlessly added or removed. Currently I do not know how this is done. The only way I have seen it done is with a load-balancing front end. To me that seems like a single-point-of-failure situation.

Wednesday, January 12, 2005

Firefox - The World's Best Web Browser


Better (than bad), safer (than tragic), taller (yes)

This post is a work in progress.

Firefox is a run-of-the-mill web browser, not unlike MS Internet Explorer. However, it comes to us from the open source community, and with that come a number of advantages. It was built on the ashes of the old Netscape browser. It has a few nice new features, runs a bit better, and is safer to use than IE.

Netscape open-sourced its Communicator source code in 1998, in an effort to "harness the power of thousands of open-source coders around the world".

One of the major problems with Microsoft "winning" the earlier browser wars was that they essentially stopped developing their web browser. Since Firefox is an open source effort, lots and lots of people can offer their suggestions and ideas, and assist in making them happen. See the bottom of the Firefox main page for more details on the browser's features.

Extensions allow third-party developers to create a huge variety of plug-ins that add all sorts of new functionality. The plug-ins page has an enormous list of new things that Firefox can do, from ad blocking to web development tools to an egg timer.

Although there seems to be a lot of activity in this area, so far support for accessibility software is limited. There are some details on this here.

Beyond the above, it's nice to know that since this is an open source project, developers have direct access to the source code. This should make development work in these areas MUCH easier than dealing with Microsoft.

The most serious problem I see is that official JAWS support doesn't seem to exist. I did come across a Firefox extension (or at least talk of one), so it may be that a third party will release "JAWS support" before Freedom Scientific does.

I'll keep an eye out to see what develops in the accessibility area. The moral of the story, I guess, is that we need to keep IE available to people who rely on accessibility software. I'll encourage Firefox use for everyone else, though.