
autonomous_unit

Advanced Member
  • Posts: 891

Posts posted by autonomous_unit

  1. Yes, I think they are both fine CPUs and if I were buying a new system today, I would probably compare current prices for equal clock rates and equal motherboards and buy the cheaper whole system.

    As for power consumption, my only experience is the practical system consumption I get from similarly configured systems: a 2.4 GHz Core 2 Duo and a 2.2 GHz Athlon X2. Both have 1024 MB RAM, onboard NVIDIA Quadro NVS graphics, onboard SATA controllers, etc., and I have logged some power consumption metrics from my UPSs. My Intel machine, despite a higher clock rate, draws about 90% of the power when idle (roughly 70 vs 80 watts), and about 60% of the power at full load (125 vs 200 watts). This is running Linux with the same dynamic clock scaling services enabled. I cannot be sure how accurate these figures are as absolute watt measures, but I trust that the UPS is giving reasonable relative measures... I attached each system to the same UPS, one at a time, to get comparable values.

    Of course, if you really care about power, you'd probably start with an embedded/laptop style system. For comparison, my three year old laptop (with only one core and a slower 1.7 GHz clock) draws around 11 watts idle and 32 watts at full load (13% and 17% of the AMD system consumption)... this is taken from the internal ACPI battery monitoring, and so should be taken with a grain of salt when comparing to the above UPS measurements.

  2. It's a mistake to look at the AMD model number such as "5200" and think that means 5.2 GHz! My AMD X2 4200 has two 2.2 GHz cores and is definitely slower than my Intel Core 2 Duo machine that has two 2.4 GHz cores. In this case, both systems are using DDR2 memory.

    I have found that the AMD X2 and the Intel Core 2 Duo compare pretty well at the same clock speeds, e.g. their relative performance on my own favorite benchmarks comes out pretty close to what you would expect by comparing the real clock rates. I'd say that the Intel parts perform a little better on 32-bit code than the AMD parts at the same clock speed, while the AMD parts are faster if you run in 64-bit mode. The Intel parts also consume less power than comparable AMD parts.

    As for recommending a motherboard, it really depends on your other goals and requirements. In recent years, I've given up on the idea of "upgrading" computers much. Each time I buy a new CPU, it tends to be a different socket and require a new motherboard. I usually get the ones that cost less than $100 US, with everything integrated so I just need to add RAM and disks to have a functioning system. Sometimes I can move RAM between systems and sometimes I have to upgrade it too, such as when I went from AMD socket 754 with DDR memory to socket AM2 with DDR2 memory. I am not a gamer, so I do not need a high-powered graphics card...

  3. Even though those pictures were taken in daylight, the flash was used. Those blobs of light are very common with small cameras using an internal flash. They are the result of the light hitting very small objects near the camera and reflecting straight back into the lens, because the flash is situated so close to the lens. They are blobs because they are way outside the depth of field of the camera at that focus and aperture setting, so they appear as blurry circles of unfocused light on the sensor.

    Next time this happens, try manually disabling the flash and you will find that these spots go away. I've seen them happen due to small bugs in the air, very light snow (like crystals you might not even notice are falling), fog, smoke, pollen, and dust. I habitually disable the flash on my digicam unless I explicitly want its effect and think it will not ruin the shot.

    Either that, or Nikon forgot to install the Spirit Filter when they assembled your camera.

  4. I haven't had to do a 90-day report in a couple of years, due to frequent international travel. I just realized that my current stay has been longer than most, and by the time I fly out next week, I will be about 20 days late for a 90-day report!

    Is this something which needs a trip to Suan Phlu to rectify? If not, is there a fine or other procedure I need to prepare for when exiting at Suvarnabhumi? I will be leaving on an early morning flight (6:50 A.M.), and I want to make sure I don't run into some problem which would make me miss my flight...

  5. Is there a special 'sunblock' curtain fabric designed specifically for energy conservation purposes?... If so, do you have a product name (in English or Thai)? I wanna install new curtains on a rail above our sliding upstairs windows that will have an insulating effect as well as totally block sunlight (tutsi resides in an evil, pitch black upstairs lair while downstairs in the sunlight happy Thais cavort in sanook abandon...)

    We asked for curtains to block the heat, and ended up with a backing layer (detachable with velcro) that reminds me of portable movie screens from back when people did slide shows and home movies. Its texture resembles vinylized canvas, and it has a silver color, but it does not look like metalized mylar or anything that shiny. My wife doesn't recall any special term for the material, just something like this: ผ้าม่านสะท้อนแสง (roughly, "light-reflecting curtain fabric"), and she thinks the vendor said people use the same material to make car covers...

    It works well enough that you notice, first, how much light leaks around the edges where the curtain folds don't touch the wall, and second, how much heat pours from the ceilings and walls once you cover the windows. :o Maybe you should burrow underground for your lair, and let the fun move upstairs...

  6. What is ironic in our situation is that my wife's oldest siblings seem to think my wife owes them something for her having gone to the US to study. But she actually paid her father back many times over (before he died) by wiring a fraction of her scholarship stipends and earnings (as a teaching/research assistant). These older siblings have essentially taken over the family assets, so they benefited from the money she remitted to her parents in the past, even though they have squandered it now.

    Furthermore, the father didn't actually fund her studies. At most, he helped with some books or maybe a winter coat. For tuition fees, she acquired a huge debt which she can pay off or work off as a poorly-paid employee of the Thai organization that sponsored her. The worst part is that her extended family don't seem to be able to understand what a large debt like this means. She told them about it, and they somehow twisted that into an impression that she must really be rich to have such a debt, so surely she can help them with their smaller amounts. :o

    I don't envy her position. She realizes that in the end they are going to think she is stingy and she probably will never have a close relationship with any but her youngest siblings and her mother---the ones who can accept her for who she has grown into since leaving her hometown as a teen to study in Bangkok and then abroad. In some sense, she's still coming to grips with the fallout from her deciding at such a young age to go for the brass ring, make something of herself, and escape that life.

    (Oh, and in anticipation of the next question... yes my wife informed me of her scholarship debt and repatriation commitment long before we got married, and we decided it would be worth it to follow through anyway which leads to where we are now.)

  7. The point of open source or "free" software isn't that it doesn't require licensing fees. The freedom is with regard to the legal ability to do what you please with the software.

    For the specialty systems, the open source value includes the fact that you can buy support or even commission product enhancements from other sources, e.g. any consultant or in-house staff can legally modify the product without any special contract or consent from the original authors/vendors. This brings a level of security in planning when you base a new business on the product, because you no longer depend on the original vendor staying in business.

    For the embedded system/appliance makers, the monetarily free software is of course attractive. The vendor spends their human effort up front on development and keeps the per-unit manufacturing costs low, because they do not need to pay per-unit licensing fees to distribute the resulting product. This is another area which threatens MS, because they would like to get some per-unit licensing for everything in the market (as they've always wanted), and putting a patent tax on the Linux units would also make the MS embedded OS seem more price-competitive.

  8. I've always appreciated Thinkpads, though my latest one was bought three years ago, before IBM sold the business. On the other hand, Lenovo was making them for IBM already, so maybe the quality is still there...

    I noticed the US pricing for the newest T61 is very attractive. It is based on the newest Intel chipsets that were just talked about the other day here, and for around $1500 USD you can get Core 2 Duo, 2 GB RAM, a large drive, WXGA+ LCD (16:10 widescreen), 802.11a/b/g/n, bluetooth, and DVD writer. I am not sure what the local prices are in comparison, but it might be worth checking! They even have sub-models with the new Intel graphics or NVIDIA, depending on your preference.

    My older T41 has been very durable considering the beating it's taken for the last three years. And it has about half those specs in every regard and cost me a lot more...

    I always prefer a faster 7200 RPM laptop drive, but you can get either that or 5400 RPM. The slower ones are cheaper so you can get more space for your money, if you don't care so much about the last bit of disk speed.

    If you do go with a Thinkpad, and you plan to keep it for a long time, I highly recommend getting the upgraded international warranty service. For a few hundred dollars, you can get 3 year coverage to just bring it into the service depot and have failing parts replaced. I had a motherboard replaced this way, and that would not have been cost effective to pay for after the regular 1 year warranty expired.

  9. I have someone who will sell me a little router (cheap SOHO wireless router) but it needs a new power supply... does anybody know if there are shops around where I can get an inexpensive 3.3V, 3A DC power supply, i.e. that plugs into mains current, and how much they might cost? I want to know if it is really worth my while to get this cheap router, or whether getting a new power supply will cost so much that I might as well just buy a new router!

  10. I think what is going on here is that the horse is out of the barn, and MS and other old fashioned software vendors are trying to convince themselves and the public that there is nothing beyond the barn doors.

    This is the second wave of open source software FUD, triggered by the shocking (to them) situation where real commercial success is being had by purveyors of open source software, as well as companies which are mere users of open source. Companies like Red Hat have been around long enough to prove they are no flash in the pan now. With MS's latest comments about patent violations and needing to go after users of the infringing software, I hope to see a big mess when they decide to go after their new nemesis Google for its rampant use of Linux, or maybe some of the major financial firms who run big Linux clusters for their automated trading. Hopefully it will unfold more rapidly than the silly IBM/SCO Linux trials, as that was too slow for spectator sport! :o

    The third wave might start soon, when MS really starts to panic as Linux-based mobile phones further threaten the spread of their software in that market as well.

    As someone who has been paid to produce open source for about a decade now, I am happy to see that more mainstream corporate types, venture capitalists, and lawyers are getting their heads around the open source position and seeing how to manage it like any other enterprise. The risks can be mitigated and the costs and benefits quantified such that real business, with all its plodding inefficiencies, can exploit it too. It doesn't require a big leap of faith like it once did.

  11. I've always used "gtkpod" to load songs on my iPod, but I don't think there are any alternatives to the iTunes online store, if you are into that (I've never used it, but I get the feeling some people live and die by it).

    Edit: I should add, people using Fedora are well served to learn about the Livna repository of additional packages to enable multimedia and other functions not included in the entirely-free Fedora builds. Also, running a "yum info available | less" command sometime when you are bored can be instructive... just skim through and read some of the package descriptions for all the stuff you might not know is there. It's a very long list...
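    For example, on a current Fedora the setup is roughly this (the exact release RPM name and URL vary by Fedora version, so check livna.org first; the package names here are from memory and may differ):

        # enable the Livna repository (adjust the release RPM for your Fedora version)
        su -c 'rpm -ivh http://rpm.livna.org/livna-release-7.rpm'
        # pull in gtkpod plus some of the common multimedia bits
        su -c 'yum install gtkpod gstreamer-plugins-ugly'
        # and then browse everything else on offer
        yum info available | less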

  12. The worry in many cases isn't the owner of the network but the knowledgeable and nasty guy who "owns" the malware that is likely running on many hosts in your typical commercial network. Unless the real admins have a lot of security-conscious monitoring going on, the establishment will not realize how infected their LAN has become.

    It used to be typical in US academic spaces that the first thing that got installed by an attacker was a LAN sniffer to automatically capture telnet, ftp, and rlogin passwords (the prevalent traffic among the Unix machines). The attacker wasn't necessarily all that skilled but just knew the recipe to install these tools. This is where "rootkits" had their genesis...

    Personally, I wouldn't trust anything outside my laptop as a general rule. Run a local firewall in software, and use secure protocols. I'll make an exception when I've personally configured a LAN or cross-over cable, but even then I feel dirty. :o It's the only way to stay secure, sort of like how habitually using seat-belts helps make sure you're wearing one that one time you really need it. I wouldn't use ftp with passwords over the internet anywhere, unless I'd set it up with secure one-time passwords. But for all that trouble, it would be easier to set up SSH/SCP/SFTP and be done with it.
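    Swapping the old tools for their OpenSSH equivalents is about as simple as it gets (the hostname here is just a placeholder):

        ssh user@fileserver.example.com                        # instead of telnet/rlogin
        sftp user@fileserver.example.com                       # instead of an ftp session
        scp report.txt user@fileserver.example.com:backups/    # one-shot file copy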

  13. It is possible that the older DVI link would have limited resolution and that the VGA output could go higher. Either the DVI link itself or the port it uses to get framebuffer data could conceivably be a limiting factor in terms of the number of pixels/second which eventually limits your practical screen size. I really don't know how to answer that except to try it out (if you aren't willing to believe the written specs).

    It looked pretty good in analog. I was actually surprised, and even the LCD sub-pixel antialiasing looked OK except for an odd artifact here or there. Particularly given that it was a new LCD, it looked better than the internal screen, which has a pretty worn-out backlight! Well, except that the 1400x1050 display is a pretty fine pitch compared to most desktop LCDs... the analog inputs for LCDs have improved a lot since the last time I tried something like this, about 5 years ago.

  14. My three year old T41 has a Radeon Mobility 9000 (M9) graphics chip with 32MB video RAM and it definitely supports 1680x1050 on the external VGA as I did that recently for a few weeks. In my case, I had the same desktop on the internal 1400x1050 display and scrolled around the larger 1680x1050 space, but I am sure that was a driver option I selected (clone mode) and separate displays was also possible. This was with Linux, but clearly the hardware supports it.

    You should check your own hardware info, but I suspect there is little difference between your T40p and my T41, unless yours had a better chipset. I don't think the graphics capabilities changed much (if at all) between T40-T42 series.
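    If you are on a recent X.org with RandR 1.2 support, switching the external port on looks something like this (the output name "VGA" is an assumption, so check the listing first; older driver stacks did the same thing through xorg.conf options instead):

        xrandr                                  # list outputs and supported modes
        xrandr --output VGA --mode 1680x1050    # drive the external panel at native resolution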

  15. What kind of instabilities are you getting?

    It's difficult to explain as it is more of a QoS issue than anything else, and I am doing a bit of guesswork. I've always had very good signal-to-noise and attenuation figures here, because our lines were recently installed. But I would see the WAN performance go up and down in fits and starts. I used to blame TOT 100%, but I have started to think it is my equipment making matters worse, because it can also become very unresponsive when I try to telnet into the router or load its configuration web page (with D-Link firmware).

    With the version 3 D-Link firmware, I've noticed that the RAM usually seems nearly depleted when I telnet into it, whereas with the OpenWRT firmware there is usually 3-4MB free. I wonder whether the DSL-G624t is equipped with more RAM and whether this is a problem with running the firmware on the older G604t... Unfortunately, the OpenWRT firmware is not quite finished yet, so the tradeoff for this improvement is that its ethernet driver is a little flaky. I've seen other problems when I try to use all its upload bandwidth, as the ethernet driver starts logging errors and I notice TCP connection errors where other mostly idle connections are timing out while the high-bandwidth connection keeps on going.

    I've also started paying attention to our electrical power and put some APC brand UPSs into service. It is surprising how many brownouts are occurring in the evenings, e.g. the mains voltage is dropping low enough to trigger the UPS into battery mode for several seconds or even minutes at a time. It is often said that the cheap power supplies for these home routers are under-specified, so perhaps their DC output is also dipping too low if the mains voltage dips...
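    If you want to catch those dips in the act, a trivial logging loop makes the pattern obvious afterwards (192.168.1.1 is just a stand-in for your router's address):

        # log a timestamped latency sample to the router every 30 seconds
        while true; do
            echo "$(date '+%F %T') $(ping -c 1 -w 5 192.168.1.1 | grep 'time=')" >> router-latency.log
            sleep 30
        done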

  16. I did a spur-of-the-moment CDMA connection using a coworker's phone in the US before, and it was nearly identical to my existing GPRS method. His phone appeared as a USB modem class device, supported by the cdc-acm module, and I just had to get the special dial string from him to put his phone into the CDMA equivalent of GPRS mode, just as I'd had to search the web for the dial string to use with my Motorola GSM phones.

    The first question is whether the OP's modem implements the standard USB modem device class or not. If so, then he just needs to dig around until he finds the special dial string to put it into the packet connection mode.

    When all of this happens, the Linux host just speaks PPP with the modem and the modem exchanges packets over the wireless interface. Any old dummy username/password should work with PPP, if it won't connect without doing any authentication. This is the part that always takes repetition to figure out: the wireless PPP is not a conversation with a remote ISP, as it is with old fashioned analog modems.
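    As a rough sketch of what the Linux side looks like (the *99***1# dial string is the common GPRS one, and the device name assumes the cdc-acm driver picked up the phone; both are assumptions to verify on your own setup):

        # dial the packet connection and run PPP over the phone's serial device
        pppd /dev/ttyACM0 460800 \
            connect 'chat -v "" AT OK ATD*99***1# CONNECT' \
            noauth defaultroute usepeerdns user dummy password dummy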

  17. Underneath the veneer of branding, most of these devices are nearly identical reference boards from a handful of original equipment manufacturers. Because of that, it is difficult to make any real across-the-board statements by brand, e.g. you can find D-Link and Linksys models that have essentially identical circuits inside. Likewise, you can find two models from the same brand which have nothing in common but the logo. The model designation will rarely help you sort this out, as even just the little "revision number" on the label may imply an entirely different hardware platform within the same model line!

    With all the consumer junk, you pretty much have to accept the risk that it might or might not work well for you at first, and things may or may not improve with software updates before the vendor stops supporting the model. The only exception is these aftermarket firmwares mentioned previously, where open source folks have figured out how to run more modern Linux software on the devices, and you can expect improvements and bug fixes for years to come.

    I also have a D-Link DSL-G604t and have started playing with OpenWRT firmware on it, because even the above-mentioned firmware from the newer model in Italy still has some instabilities for me on my TOT ADSL line. I have had this thing for almost three years now, and I get frustrated enough to investigate firmware options about once every six months. It is a trade-off though... the wireless function seems hopeless with the open source firmware, so while my ADSL performance seems more stable now, including some QoS support, I don't have wireless at all. I have not seen any well-supported ADSL+WiFi+ethernet switch routers that can run open firmware. If I buy a new device for wireless, I am no worse off than if I bought an ADSL modem and separate wireless router, except I paid too much for my modem. :o

    I think you either need to plan ahead and buy exactly what you want for open source use, or just stick with cheap devices where you can accept the need to discard and replace more frequently.

  18. It's a long shot, but try "modprobe cdc-acm" and see if it detects the modem. That is the generic USB serial port driver which is used with most GPRS modems... if it detects the device, then you would try to use PPP over the newly created ACM0 serial device.

  19. Yes, this software RAID discussion was predicated on budget solutions for home storage for very technical folks, not production business use. But, I guess I would question the use of nearly any of the RAID cards in a PC, once you cross the threshold from cheap commodity stuff. Things like the Apple X-Serve chassis are just too affordable and attractive once you really need some redundancy of controllers, power supplies, and even server hosts, and you're actually going to be managing a growing disk pool.

    For geeks who want a cheap and effective RAID5 solution, and who can handle their own tech support, the Linux software RAID can be very nice. As Phil pointed out, it can be cheaper to outfit a fast enough PC to act as a network file server over gigabit ethernet, rather than bothering with storage cards which can start costing as much as the PC. This works great for "near-line" storage of movies, music, and digital photo archives where you do not need bleeding edge performance, but just cheap trustworthy storage.

    My personal experience is that software RAID5 on locally attached disks is fast enough to not care about the CPU overhead on a modern Athlon64 or Core2 Duo system. And it is more than sufficient to fill gigabit ethernet to capacity using NFSv4, without the CPU ever scaling out of its lowest clock speed on the file server.
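    A minimal NFSv4 export of that sort is not much work on a Red Hat/Fedora style server (the paths and addresses below are hypothetical):

        # /etc/exports on the server: /export is the NFSv4 pseudo-root (fsid=0)
        /export        192.168.1.0/24(rw,fsid=0,no_subtree_check)
        /export/media  192.168.1.0/24(rw,no_subtree_check)

        # then on the server:
        exportfs -ra
        # and on a client (the path is relative to the pseudo-root):
        mount -t nfs4 fileserver:/media /mnt/media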

  20. I would argue that RAID5 is no worse than RAID1 in the case of "disk errors on the remaining disk(s)". If you get that, you are screwed and should have used RAID6 and/or spare drives. The advantage of RAID5 is no more and no less than that you get to have N-1 disks' worth of usable space. And, with software RAID5, you avoid the challenge of getting expensive replacement controllers. You can literally just toss the drives into any old Linux box and have the RAID volumes auto-detected and reassembled. I have been very happy running Linux software RAID5 volumes on fileservers running Red Hat and Fedora (just what I historically used).

    Unlike some hardware RAID5, a good way to think of software RAID5 on Linux is that you are making individual RAIDed partitions and not an entire RAIDed disk. You do not usually create partitions within such a volume. You can even use different RAID levels on different slices. For example, my usual setup is to have a RAID1 set across all drives for holding /boot, and RAID5 sets for holding / and /home and so on. I configured the /boot to mirror and keep the other drives as spares, so it will automatically rebuild the mirror on a spare if one of the main mirror partitions were to fail. I manually install the bootloader (GRUB) to all drives' MBRs so that it can boot as long as I move one of the good drives to the controller port that the BIOS uses as the boot disk. I do not tend to operate the RAID5 sets with spares, because I usually do RAID5 with just three disks.
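    In mdadm terms, that layout looks roughly like this for three identically partitioned disks (the device names are hypothetical):

        # /boot: two-way mirror with the third partition as a hot spare
        mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
              /dev/sda1 /dev/sdb1 /dev/sdc1
        # / and /home: RAID5 sets across the matching partitions
        mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
        mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3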

    In practice, I've had one "catastrophic" failure on a fileserver that had been running unattended for months. One SATA controller port lost its marbles and disappeared from the OS, ejecting that drive from all RAID sets. Then, one of the remaining drives actually developed bad blocks before I noticed this! Then, to top it off, it got rebooted by an eager assistant, and the OS tried to fsck the volume that was running with just the one good drive and the one with bad blocks. This broke the array, and since the first ejected drive had fallen out of sync with the remaining good drive, a rebuild was impossible.

    The saving grace was that because each drive was sliced into partitions, only the / filesystem was destroyed in this process. The /home partitions on each of the remaining good drives were still untouched, because the system tried to fsck the / filesystem first and never made the others writeable. So, reinstalling the OS was sufficient to recover and then let the OS rebuild the data volumes on a new replacement drive. :o As drives get larger, this use of smaller RAID5 slices is valuable, as drives often develop bad blocks in just small regions at first, and you do not need to eject all partitions immediately (even though you should replace the drive ASAP). This means there is less chance of multi-disk failures affecting a particular volume... you might be able to rebuild different volumes with their good partitions off of different mixtures of partially-bad drives.

    Running LVM on top of this can help recombine the slices into larger spaces, but I personally would feel more confident with my bulk data split into separate file trees on each volume, so I can at least be certain to recover one sub-tree at a time if I really did not keep backups. In practice, I actually keep local backups on the same RAID set (to protect against accidental deletes) and then rsync-based mirrors to another similarly configured system at another location (to protect against OS corruption, fire, flood, etc).

    One last comment: if you do go with software RAID, it is good to enable the periodic SMART tests that will cause drive surface scans to happen. This will help force bad blocks to be detected early, rather than the first time you try to access some infrequently used bulk data. This helps minimize the chance of data loss since you can replace the drive before more problems sneak up on you.
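    On Linux, the smartd daemon handles that; a line like this in /etc/smartd.conf (schedule syntax per smartd.conf(5)) monitors a drive, runs a short self-test nightly, and a long surface scan every Saturday:

        # /dev/sda: all monitoring (-a), short test daily at 02:00, long test Saturdays at 03:00
        /dev/sda -a -s (S/../.././02|L/../../6/03)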

  21. The behavior you are talking about is called "reshape" in the Linux software RAID lingo. It is apparently available as of kernel 2.6.17, with recent mdadm versions to support it. I'd suggest browsing around on the keywords "raid5 mdadm reshape" to get a feel for the state of things. Given that this is new support done in the past 6 months, I'd be cautiously optimistic about trying it. You do keep backups of your raid sets too, don't you? :o
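    If you do try it, the sequence is roughly the following (device names hypothetical, and take that backup first):

        mdadm --add /dev/md1 /dev/sdd2            # add the new disk as a spare
        mdadm --grow /dev/md1 --raid-devices=4    # reshape the array onto it
        cat /proc/mdstat                          # watch the (slow) reshape progress
        resize2fs /dev/md1                        # then enlarge the filesystem to match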

    In the past, I've always taken care to make a set of RAID5 volumes that span the same disk set, and keep my files split over multiple mount points. This way, I could manually reshape by shifting one volume's contents into free space on another volume, then destructively reshaping one volume at a time, formatting, and shifting the content back into place. This requires having enough free space to work with and some careful thought to make sure the whole activity will not paint you into a corner...

    Some people even do this via LVM e.g. put one large, growable filesystem across a set of RAID5 volumes, so they can just unpin and restore one RAID5 section at a time underneath the single large LVM volume. This always seemed like one too many layers for my taste, as it obfuscates the risk of managing the free space underneath...

    One note: all of these methods require a complete read/write cycle on all disks to rearrange the parity. Don't expect the reshape to go quickly. Making this slow process safe across crashes/reboots/etc. is one of the things that made the feature slow to develop.

  22. Like Tywais said, the usual luck is that you encounter a sloppy attack and detect it from within the system. Rootkit scanning is roughly analogous to trying to "sweep your office for listening devices", assuming someone has already broken in, whereas firewalls and antivirus are more like locking your doors and having guards at the perimeter of a secured area. You'd hope your scanner isn't just a placebo with a comforting green light that always comes on after you press the button.

    But, because you cannot really inspect the computer without its cooperation, the scanning is more like asking your business partner, "look within yourself, and tell me, can I trust you?" If you cannot, would you expect him to tell you so? The only way to detect a rootkit is to reboot the computer with a trusted medium, such as a physically write-protected hard drive or a CD-ROM, and run a comparison of the disk files against a known-good reference copy. (Or remove the suspected drives and inspect them with another trusted computer.) This option would be truly wonderful for those business dealings, wouldn't it? Just vivisect and analyze the guy under a microscope before continuing the transaction... :D

    Similarly, the only sane recovery from suspected rootkit incidents is to install a fresh OS and then carefully restore your data from backups (particularly if your data could contain viruses/rootkits itself). To continue the analogy, this is sending your ambiguous business partner to sleep with the fishes, and then carefully training up his replacement. Of course, once the new guy starts having contact with the outer world, you start to wonder whether his training was sufficient... you'd better start preparing his replacement too...
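    On an RPM-based system, one imperfect version of that trusted-medium comparison is to boot a live CD and verify the installed files against the RPM database (which an attacker could also have doctored, so checksums saved from a known-good state are better still):

        # from the live CD, mount the suspect root filesystem read-only
        mount -o ro /dev/sda2 /mnt/suspect
        # verify sizes/checksums/permissions against the RPM database on that disk
        rpm --root /mnt/suspect -Va > /tmp/verify.out
        # or compare against checksums recorded while the system was known-good
        cd /mnt/suspect && sha256sum -c /media/cdrom/known-good.sha256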

    To be honest, I don't bother scanning for rootkits on my Linux machines, because I think I apply due diligence to keeping them updated with security patches and "keeping the doors locked" in the first place. Regular backups protect the data in case I have to sacrifice the computer after an attack. But I don't trust Windows for all the anti-malware software in the world. I avoid running it at all most of the time, and when I do, I run it in a padded room with no sharp objects or shoe-laces, and then I sacrifice it and replace it with a clone after each use. :o

  23. Try gathering a set of traceroute results to the NECTEC server during the fast and slow phases, and look for a change in the path. If the path changes at each phase, i.e. flipping back and forth between a fast and slow set of routers, then maybe there is some misconfigured route management. In that case, providing them the traceroute information with IP addresses of their routers might be useful, but this depends on you getting in contact with someone far enough up their support/engineer ladder... Even if the path does not change, the traceroute may be useful if it shows a particular hop in the path where a large latency is introduced during slow phases but not during fast phases.
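    A simple way to collect that evidence is to log timestamped traceroutes for a few hours and compare the fast-phase and slow-phase samples afterwards (substitute the real NECTEC server address for the placeholder):

        target=server.example.org    # put the NECTEC server address here
        while true; do
            { date '+%F %T'; traceroute "$target"; } >> trace.log 2>&1
            sleep 300                # one sample every five minutes
        done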

  24. Well, jitter (amount that latency changes over time) is the main obstacle to real-time protocols like VOIP, once you get above a minimum bandwidth such as about 9600 bits/second required for the GSM codec. I've run the GSM codec over a GPRS link with a 2 second latency, and it was tolerable once you got used to the "mission to Mars" delay between speaking and hearing your friend's response.

    As Priceless mentioned, it is queueing theory which would show how all these things are related together... it's a very complicated topic but the main point is that the Thai ISPs seem to be hopelessly biasing their system towards aggregate bandwidth utilization, to the exclusion of responsive behavior for any individual application. As long as all the peer-to-peer applications are intentionally making it difficult to classify bulk traffic separately from interactive traffic (with encryption, random port assignments, etc.), there is little solution other than to start treating all traffic as either bulk or interactive and setting your queueing policies appropriately.
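    On a Linux router, that two-class policy can be sketched with tc: cap the total just under the uplink rate so the queue stays on your side, give interactive traffic priority, and let bulk soak up the rest. The rates and the eth0 device here are assumptions for a roughly 1 Mbit uplink:

        # HTB root qdisc; unclassified traffic falls into the bulk class 1:20
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1: classid 1:1 htb rate 900kbit
        # interactive class: guaranteed share, may borrow up to the full rate
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 600kbit ceil 900kbit prio 0
        # bulk class: whatever is left over
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 300kbit ceil 900kbit prio 1
        # steer an obviously interactive protocol (ssh) into the fast class
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
            match ip dport 22 0xffff flowid 1:10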

  25. As to the question of why this might happen... you have probably observed a public-address system generate "feedback" before, where a microphone picks up a sound from one of the speakers and it goes back through the amplifier, eventually generating a loud, high-pitched noise? Or you've been stuck in a traffic jam and finally come to the head of it where traffic thins out, only to see that there is no obstruction, but the traffic has formed a memory of a previous obstruction, passed from car to car through the flashing brake lights and over-reactive drivers?

    Well, the same feedback principle can go into action in networks when there is a load that reacts to the behavior of the network. Instead of there being an amplifier re-generating the signal, there may be thousands of poorly behaving clients who attempt to gobble up bandwidth until something in the network breaks. Then, they "back off" and after a delay they try again. This pulsing could be the result of such a feedback cycle, repeatedly overloading some part of the network. To make matters worse, if the network also tries to respond to this overload condition by adjusting itself, then you can get a stronger tug-of-war between the client and network changes. Another real-world example would be fans stampeding a stage at a concert, or clogging exits in an evacuation... often the crowd will heave forward and back because people "retry" en masse.

    There is a bit of art and science to trying to minimize these effects. Most Internet protocols and clients are carefully designed and analyzed with statistical models meant to produce good behavior, for example having randomized back-off so that everybody does not try to rush the gates again each time there is a blockage. Instead, some will try earlier and some later, so that a smoother flow of clients retries over time. There is also work done by the network providers to try to filter and smooth these spikes by carefully tuning the way the network responds.
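    A minimal shell sketch of that randomized exponential back-off (fetch_the_resource is a placeholder for whatever request keeps failing):

        delay=2
        until fetch_the_resource; do
            sleep $(( RANDOM % delay + 1 ))                # random wait spreads out the retries
            [ $delay -lt 300 ] && delay=$(( delay * 2 ))   # back off further on each failure
        done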

    Quite likely, something at your ISP is misconfigured and unable to smooth these spikes among the users. I think that a poorly configured traffic shaper could do it, if it is too aggressive about shutting down flow and then quickly resetting to a fast rate (it would be much better to shut down gradually and take longer to open back up, so that client applications will adapt to a new steady-state bandwidth). In the past, I've observed much slower pulses (15-30 minute cycle time) when something called "route flapping" occurs; in this case, some misconfiguration causes the ISP to misroute traffic for a while, and then restore it. Often, these route flap events will affect one set of destinations, and then another, and so on in a cyclic manner; and this can be triggered by different ultimate causes.

    By the way, partial power blackouts stem from the same problem. If the power system is overloaded and shuts down, turning it back on with all of the existing (too high) load will just cause repeated failures. So instead, the power grid shuts down some areas to bring the total load into an acceptable range for the generating capacity. A brown-out would be when they just let the overload continue, and the supply voltage drops until nobody gets proper clean power! I hope you can see the analogy to how the overloaded Internet capacity can be shared by brownouts for everyone or rotating blackouts for a few, etc. Whereas we'd probably prefer blackouts to brownouts with electrical power, we probably prefer smooth brownouts with the Internet.
