There have been many times through the years when I've seen banner ads and wondered,
"Can any of them actually be useful?"
or,
"How many of these could I click on before my computer becomes unusable?"
If I had money to burn, I'd try to do just that: log onto a different network on a computer not my own, with no anti-malware programs or ad-blockers loaded, and click on the first banner ad I see - then from there, just keep clicking and see how deep the rabbit hole goes.
I actually tried once out of boredom, and it goes deep. Half the time each ad just leads to a new ad, and you keep registering with more and more websites in the hope of eventually getting that iPod nano (yes, that's about the time period when I tried).
I suppose if you click an ad for a major company it will likely just take you to their store, though.
Very vaguely related story of mine: I had an email box dedicated to spam. As in, I created it with the explicit intent of collecting spam there, just to see how far I could take it. I started small - I signed up for several random services by sort of doing what you suggested: I followed random ads, filled in surveys, and made several accounts at some not-quite-but-at-least-borderline shady places. I scrapped my OS afterwards[footnote]well, that's making it sound more dramatic than it was - I was about to re-install anyway, so I thought that would be a fun little thing to do.[/footnote] and made sure the email box didn't have any spam filters on[footnote]it was one of those free webmail providers who also had some related services - a spam filter was one of them, and by default it was on. There wasn't really a lot of customisation - at the time I created the email box, there were something like 3-4 levels you could set it to, from "off" to "paranoid".[/footnote] and then I waited.
It was quite a success at first. I started getting emails...and emails...and emails...and a lot of emails. Got to a thousand quickly, then even quicker to about three thousand. I kept getting more and more. Occasionally I'd look at the emails - not the contents (mostly because they were boring) but the titles were awesome. You'd see the failed attempts of the spambots - people "replying" to me, people asking "me" (whichever name I had given here or there) to look at their photos, offers about various, ahem, enlargement treatments and pills, as well as surprisingly lots of stolen Rolexes and so on. One of the best titles I saw was really funny and I just had to save it - it read "Don't get mad, get Valium!".
Things went along nicely and I was getting more and more spam. I could have even tried to track how my email was being spread through various spam services as I was starting to get more and more varied spam without ever signing up for anything new. My "bank accounts" needed verification, my "insurances" too, my "Battle.net account" was apparently being hacked and what have you.
It all went well for a while until...disaster struck. At some point I stopped getting new emails. Well, at least not at the same pace as before - I had reached close to 30k, I believe, when it slowed down to a crawl, with me only receiving maybe ~150 new emails a week (it fluctuated, but that was about average). I felt so disappointed - it was as if the spammers didn't like me. In retrospect it was probably caused by a botnet being busted, but still - I liked receiving those emails.
At that point, I did sign up for a few new sources of spam, but nothing much. I didn't get the quantity of emails I was receiving before, though I did slightly increase the amount.
Then disaster struck again. The email provider decided to enforce spam filters being on. You could, at most, loosen them, but never switch them off. What's more, the spam filters were now retroactively going through the mailbox. Well, either that, or they went through mine just because every message was unread. At any rate, that really culled the amount of spam I had and hampered the incoming flow quite a bit. I sort of felt responsible, as I had literally tens of thousands of pure spam emails in my box. I don't know if I was part of the reason for tightening the spam protection, but I always thought I might have been.
At any rate, after that my little experiment was ruined and I never really came back to it. The email box is still around and still gets spammed, only it's a really low amount. I decided to check it now, and it currently sits at 1322 unread emails. I can't remember the last time I checked it, nor what amount it had then, but that's just nothing.
Not really much to do with the thread but I thought the story might be entertaining enough to have a place in it.
For reference, the box was created back in 2006 - it's almost a decade old.
You took the experiment far further than I thought I'd see in this thread. Well done.
OT: Well, DoPo has already taken this to its logical extreme I suppose. The only next step is to rig up a computer with an automated spam/pop-up prompt script or something like that.
I read something... well... many years ago, when Windows XP was pretty new. A few guys had freshly installed Windows 98 on a PC and hooked it up to the net. No firewall, no anti-virus software, nothing. Time to fatal system corruption was, I think, about 2.5 hours, if I remember right. But that was a different time and a very out-of-date (and very vulnerable) system.
But yeah, inviting all that JavaScript shit onto your system might be very bad. (Hell, we have one guy in the office who needed a hack... and he KNEW that he'd got infected but working software - so he used it ("can't be that bad"). He was cursing for about a day while trying to stem the tide of malware, adware and strange browser-corrupting weirdness...)
Which reminds me, let's see how many emails I have in my old email account. Huh... only 1300? Did Yahoo delete my spam? Weird. When I left it, it was at about 2500+, if I remember right. Couldn't really use it anymore. Now it's nearly clean, and it still goes back to 2001. Strange.
I read something... well... many years ago, when Windows XP was pretty new. A few guys had freshly installed Windows 98 on a PC and hooked it up to the net. No firewall, no anti-virus software, nothing. Time to fatal system corruption was, I think, about 2.5 hours, if I remember right. But that was a different time and a very out-of-date (and very vulnerable) system.
Windows 98 wasn't that out of date when XP came out, and it had been constantly receiving updates, especially security updates. I'll tell everyone now: an older operating system that's still supported and kept up to date is often more secure than the newest operating system, because the older system has had time for more of its vulnerabilities to be exposed and fixed. A fresh install of an old OS with no updates will be very open to exploits, though, because its vulnerabilities are well known by that point. Still, there's a tipping point: eventually an OS gets so old and so obscure that nobody targets it anymore, making it extremely safe.
I ran a Windows 95 PC to death on a totally fresh install when I got my Windows 98SE machine. The 95 tower was fully updated, and all told, full contamination, corruption and irreparable crash took about a month. Still, I lost nothing - I reinstalled 95 and stuck it in the closet.
As for me, I'm using Linux now, so if I had the disposable income I'd actually like to see how long it takes to destroy a Lubuntu install without risking my data. I suspect that if I kept it up to date it'd survive effectively indefinitely, especially if I didn't install WINE for viruses to run in. Because Linux has such a small market share, there isn't enough incentive to target it with malware, spyware, and adware.
As for me, I'm using Linux now, so if I had the disposable income I'd actually like to see how long it takes to destroy a Lubuntu install without risking my data. I suspect that if I kept it up to date it'd survive effectively indefinitely, especially if I didn't install WINE for viruses to run in. Because Linux has such a small market share, there isn't enough incentive to target it with malware, spyware, and adware.
Yeah, I suspect that may be the main reason. Some people like to claim Linux is somehow inherently secure, but the reality is more likely that it largely just isn't worth targeting.
The same thing with Mac OS.
It doesn't seem worthwhile to attack that for most purposes. Now, sure, if you're trying to target web servers specifically, then you might have more of an interest in attacking Linux systems, but otherwise? Why bother?
Open source is a double-edged sword in that regard. While it makes it easier to fix problems, the reason for that is because it makes them easier to find. Which works both ways of course.
Then again, proprietary software does increase the probability that the only people who realise a program is vulnerable are those with dubious intentions...
This makes me curious now... How many 'proof of concept' hacks/exploits/whatever exist for linux systems just to make a point?
It seems like it would be an awfully big temptation for certain groups to 'hack the unhackable'...
After all, that's the fundamental reason DRM cracks exist. Not so people can play pirated software, but just because some people really like the challenge of it.
I did watch this video recently where someone demonstrated they had malware on their Linux system...
So it's definitely not impossible, unlike what some seem to want to claim.
Still, no system is immune to viruses. The main question is the source of infection.
Getting a computer to do something it really shouldn't is... Almost trivial for any computer programmer.
Getting it to do that without the user noticing/directly assisting...
Is another matter entirely.
I can crash my computer (intentionally) in lots of ways. It's easy. I can also cause permanent damage to the files or the operating system.
Whether it's windows, mac, linux or anything else, if you have programming tools (and admin rights), you can almost always destroy the system one way or another...
So... That really shows what the challenge is I suppose.
Getting into the system is the hard part.
Once you get that, the rest is not that challenging.
As for me, I'm using Linux now, so if I had the disposable income I'd actually like to see how long it takes to destroy a Lubuntu install without risking my data.
The command you're looking for is "sudo apt-get autoremove". Careless use of autoremove will render your installation thoroughly and unrecoverably ruined while leaving your documents intact.
Well, he did say that Windows 98 was a new install.
Still, it has less to do with the Windows 98 vs XP at the time XP came out and more to do with the fact that Microsoft didn't design Windows to be particularly secure. That isn't so much stupidity on Microsoft's part as a product of its time. A lot of technology from Windows XP and before came from a time when security wasn't really on the minds of a lot of people largely because a lot of the security problems we face today weren't present. Microsoft did come in a little late, but Windows has come a long way over the years, just like the Internet in general has come a long way since the days where it was assumed everyone could be trusted.
renegade7 said:
The command you're looking for is "sudo apt-get autoremove". Careless use of autoremove will render your installation thoroughly and unrecoverably ruined while leaving your documents intact.
Are you sure it's "sudo apt-get autoremove"? That shouldn't be a dangerous thing to run, as it's only supposed to deal with dependencies that are no longer needed.
That said, I do remember there being a (I believe dpkg) command that basically flags every non-critical package for removal. That would certainly "destroy" the system while maintaining all the data. I can't remember what it is now, though, unfortunately.
Are you sure it's "sudo apt-get autoremove"? That shouldn't be a dangerous thing to run, as it's only supposed to deal with dependencies that are no longer needed.
That said, I do remember there being a (I believe dpkg) command that basically flags every non-critical package for removal. That would certainly "destroy" the system while maintaining all the data. I can't remember what it is now, though, unfortunately.
I think it might have been the autoremove option in the aptitude UI, or it might have been with the --purge flag - I don't remember off the top of my head - but it completely ruined my install last year when I was trying to get the fans on my MacBook Air to work.
And yes, that's how it happened. Please don't ask.
If you want to kill your computer, that's your business, but leave me out of it. I can tell you the result will be that eventually, you'll find a blazing-hot virus and that will be the end of it.
As for me, I'm using Linux now, so if I had the disposable income I'd actually like to see how long it takes to destroy a Lubuntu install without risking my data.
The command you're looking for is "sudo apt-get autoremove". Careless use of autoremove will render your installation thoroughly and unrecoverably ruined while leaving your documents intact.
All that command does is remove packages that are no longer needed. I use it all the time, because for some reason my install buggers up the part of the auto-update that removes unneeded, outdated kernel images, so I end up with a lot of obsolete images cluttering up my root partition.
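For the cautious, apt can show exactly what autoremove would delete before you commit to it. A quick sketch, assuming a Debian/Ubuntu-style system (it degrades gracefully elsewhere):

```shell
# -s (simulate) prints the actions apt-get *would* take without
# changing anything, so you can inspect the removal list safely.
if command -v apt-get >/dev/null 2>&1; then
    # Simulated removals are printed as "Remv <package> ..." lines.
    apt-get -s autoremove | grep '^Remv' || echo "nothing to autoremove"
else
    echo "apt-get not available on this system"
fi
```

If the list is just old kernel image packages, removing them is usually harmless; anything beyond that deserves a closer look before you run it for real.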
Linux can be hacked, but there are so many different distributions that getting into each one is somewhat different from getting into every other one. This is especially true for malware, because the same trick doesn't work on every distribution. Linux itself has a small market share, and it's further divided into a ton of even smaller shares that each require different routes of intrusion. Still, that's part of where the security comes from - it's just not really worth bothering with. Though the recent Bash vulnerability was pretty scary.
Yes... Does remind me though, back when I was dealing with Windows XP, the only program I ever wrote that crashed my computer...
Was because I wrote a 1 instead of a 0 in an 'if' statement in code that called a DirectDraw graphics routine.
Turns out getting graphics code wrong will bluescreen your computer...
Linux can be hacked, but there are so many different distributions that getting into each one is somewhat different from getting into every other one. This is especially true for malware, because the same trick doesn't work on every distribution. Linux itself has a small market share, and it's further divided into a ton of even smaller shares that each require different routes of intrusion. Still, that's part of where the security comes from - it's just not really worth bothering with. Though the recent Bash vulnerability was pretty scary.
That's a good point. I guess since it's such a fragmented environment you can't even really treat it as a single system.
I guess it depends on where the vulnerabilities lie.
Kernel exploits are clearly the biggest risk.
(For one thing, that could hit Android, GNU/Linux builds, and tons of embedded devices.)
You could of course wonder how vulnerable Linux (and BSD, for that matter) are to the absolutely horrific things you could apparently do to Unix because of its nature.
The unix haters mailing list is hilarious, regardless of your opinion of unix-like systems.
http://www.mindspring.com/~blackhart/
http://web.mit.edu/simsong/www/ugh.pdf
Ah, before it even starts on any actual examples of why they hate it, the first chapter has this to say:
[HEADING=1]Unix[/HEADING]
[HEADING=2]The World's First Computer Virus[/HEADING]
Viruses compete by being as small and as adaptable as possible. They
aren't very complex: rather than carry around the baggage necessary for
arcane tasks like respiration, metabolism, and locomotion, they only have
enough DNA or RNA to get themselves replicated. For example, any particular
influenza strain is many times smaller than the cells it infects, yet it
successfully mutates into a new strain about every other flu season. Occasionally,
the virulence goes way up, and the resulting epidemic kills a few
million people whose immune systems aren't nimble enough to kill the
invader before it kills them. Most of the time they are nothing more than a
minor annoyance - unavoidable, yet ubiquitous.
The features of a good virus are:
- Small Size
Viruses don't do very much, so they don't need to be very big. Some
folks debate whether viruses are living creatures or just pieces of destructive nucleic acid and protein.
- Portability
A single virus can invade many different types of cells, and with a
few changes, even more. Animal and primate viruses often mutate to
attack humans. Evidence indicates that the AIDS virus may have
started as a simian virus.
- Ability to Commandeer Resources of the Host
If the host didn't provide the virus with safe haven and energy for
replication, the virus would die.
- Rapid Mutation
Viruses mutate frequently into many different forms. These forms
share common structure, but differ just enough to confuse the host's
defense mechanisms.
Unix possesses all the hallmarks of a highly successful virus. In its original
incarnation, it was very small and had few features. Minimality of design
was paramount. Because it lacked features that would make it a real operating
system (such as memory mapped files, high-speed input/output, a
robust file system, record, file, and device locking, rational interprocess
communication, et cetera, ad nauseam), it was portable. A more functional
operating system would have been less portable. Unix feeds off the energy
of its host; without a system administrator baby-sitting Unix, it regularly
panics, dumps core, and halts. Unix frequently mutates: kludges and fixes
to make one version behave won't work on another version. If Andromeda
Strain had been software, it would have been Unix.
Unix is a computer virus with a user interface.
They don't mess around when they say they hate unix, right? XD
Though... Some of the examples are much, much better.
In... A terrifying sort of way...
Users care deeply about their files and data. They use computers to generate,
analyze, and store important information. They trust the computer to
safeguard their valuable belongings. Without this trust, the relationship
becomes strained. Unix abuses our trust by steadfastly refusing to protect
its clients from dangerous commands. In particular, there is rm, that most dangerous of commands, whose raison d'être is deleting files.
All Unix novices have "accidentally" and irretrievably deleted important files. Even experts and sysadmins "accidentally" delete files. The bill for
lost time, lost effort, and file restoration probably runs in the millions of
dollars annually. This should be a problem worth solving; we don't understand
why the Unixcenti are in denial on this point. Does misery love company
that much?
I too have had a similar disaster using rm. Once I was removing a file
system from my disk which was something like /usr/foo/bin. I was in /usr/foo and had removed several parts of the system by:
% rm -r ./etc
% rm -r ./adm
...and so on. But when it came time to do ./bin, I missed the period.
System didn't like that too much.
Unix wasn't designed to live after the mortal blow of losing its /bin directory.
An intelligent operating system would have given the user a chance to
recover (or at least confirm whether he really wanted to render the operating
system inoperable).
Unix aficionados accept occasional file deletion as normal. For example,
consider the following excerpt from the comp.unix.questions FAQ:
6) How do I "undelete" a file?
Someday, you are going to accidentally type something like:
% rm * .foo
and find you just deleted "*" instead of "*.foo". Consider it a
rite of passage.
Of course, any decent systems administrator should be doing
regular backups. Check with your sysadmin to see if a recent
backup copy of your file is available.
[HEADING=2]Impossible Filenames[/HEADING]
We've known several people who have made a typo while renaming a file
that resulted in a filename that began with a dash:
% mv file1 -file2
Now just try to name it back:
% mv -file2 file1
usage: mv [-if] f1 f2 or mv [-if] f1 ... fn d1
('fn' is a file or directory)
%
The filename does not cause a problem with other Unix commands because
there's little consistency among Unix commands. For example, the filename "-file2" is kosher to Unix's "standard text editor," ed. This example
works just fine:
% ed -file2
4347
But even if you save the file under a different name, or decide to give up on
the file entirely and want nothing more than to delete it, your quandary
remains:
(Footnote: the "4347" on the line after the word "ed" means that the file contains 4347 bytes. The ed editor does not have a prompt.)
% rm -file
usage: rm [-rif] file ...
% rm ?file
usage: rm [-rif] file ...
% rm ?????
usage: rm [-rif] file ...
% rm *file2
usage: rm [-rif] file ...
%
rm interprets the file's first character (the dash) as a command-line option; then it complains that the characters "l" and "e" are not valid options. Doesn't it seem a little crazy that a filename beginning with a hyphen, especially when that dash is the result of a wildcard match, is treated as an option list?
Unix provides two independent and incompatible hack-arounds for eliminating
the errantly named file:
% rm - -file
and:
% rm ./-file
The man page for rm states that a lone hyphen between the rm command and its first filename tells rm to treat all further hyphens as filenames, and not options. For some unknown reason, the usage statements for both rm and its cousin mv fail to list this "feature."
Of course, using dashes to indicate "please ignore all following dashes" is
not a universal convention, since command interpretation is done by each
program for itself without the aid of a standard library. Programs like tar
use a dash to mean standard input or standard output. Other programs simply
ignore it:
% touch -file
touch: bad option -i
% touch - -file
touch: bad option -i
Amuse Your Friends! Confound Your Enemies!
Frequently, Unix commands give results that seem to make sense: it's only
when you try to apply them that you realize how nonsensical they actually
are:
next% mkdir foo
next% ls -Fd foo
foo/
next% rm foo/
rm: foo/ directory
next% rmdir foo/
rmdir: foo/: File exists
Here's a way to amuse and delight your friends (courtesy of Leigh Klotz).
First, in great secret, do the following:
% mkdir foo
% touch foo/foo~
Then show your victim the results of these incantations:
% ls foo*
foo~
% rm foo~
rm: foo~ nonexistent
% rm foo*
rm: foo directory
% ls foo*
foo~
%
Last, for a really good time, try this:
% cat - - -
[HEADING=1]The Unix Attitude[/HEADING]
We've painted a rather bleak picture: cryptic command names, inconsistent
and unpredictable behavior, no protection from dangerous commands,
barely acceptable online documentation, and a lax approach to error checking
and robustness. Those visiting the House of Unix are not in for a treat.
They are visitors to a U.N. relief mission in the third world, not to Disneyland.
How did Unix get this way? Part of the answer is historical, as we've
indicated. But there's another part to the answer: the culture of those constructing
and extending Unix over the years. This culture is called the "Unix Philosophy."
The Unix Philosophy isn't written advice that comes from Bell Labs or the
Unix Systems Laboratory. It's a free-floating ethic. Various authors list
different attributes of it. Life with Unix, by Don Libes and Sandy Ressler
(Prentice Hall, 1989) does a particularly good job summing it up:
- Small is beautiful.
- 10 percent of the work solves 90 percent of the problems.
- When faced with a choice, do whatever is simpler.
According to the empirical evidence of Unix programs and utilities, a more
accurate summary of the Unix Philosophy is:
- A small program is more desirable than a program that is functional or correct.
- A shoddy job is perfectly acceptable.
- When faced with a choice, cop out.
Unix doesn't have a philosophy: it has an attitude. An attitude that says a
simple, half-done job is more virtuous than a complex, well-executed one.
An attitude that asserts the programmer's time is more important than the
user's time, even if there are thousands of users for every programmer. It's
an attitude that praises the lowest common denominator.
[HEADING=2]Shell crash[/HEADING]
The following message was posted to an electronic bulletin board of a
compiler class at Columbia University.
Subject: Relevant Unix bug
October 11, 1991
Fellow W4115x students:
While we're on the subject of activation records, argument
passing, and calling conventions, did you know that typing:
!xxx%s%s%s%s%s%s%s%s
to any C-shell will cause it to crash immediately? Do you know why?
Questions to think about:
- What does the shell do when you type "!xxx"?
- What must it be doing with your input when you type "!xxx%s%s%s%s%s%s%s%s"?
- Why does this crash the shell?
- How could you (rather easily) rewrite the offending part of the shell so as not to have this problem?
MOST IMPORTANTLY:
- Does it seem reasonable that you (yes, you!) can bring what may be the Future Operating System of the World to its knees in 21 keystrokes?
Try it. By Unix's design, crashing your shell kills all your processes and
logs you out. Other operating systems will catch an invalid memory reference
and pop you into a debugger. Not Unix.
Perhaps this is why Unix shells don't let you extend them by loading new
object code into their memory images, or by making calls to object code in
other programs. It would be just too dangerous. Make one false move
and?bam?you're logged out. Zero tolerance for programmer error.
Date: Thu, 31 Jan 91 14:29:42 EST
From: Jim Davis <[email protected]>
To: UNIX-HATERS
Subject: Expertise
This morning I read an article in the Journal of Human-Computer Interaction, "Expertise in a Computer Operating System," by Stephanie M. Doane and two others. Guess which operating system she studied? Doane studied the knowledge and performance of Unix novices, intermediates, and expert users. Here are a few quotes:
"Only experts could successfully produce composite commands that required use of the distinctive features of Unix (e.g. pipes and other redirection symbols)."
In other words, every feature that is new in Unix (as opposed to being
copied, albeit in a defective or degenerate form from another operating
system) is so arcane that it can be used only after years of arcane
study and practice.
"This finding is somewhat surprising, inasmuch as these are fundamental design features of Unix, and these features are taught in elementary classes."
She also refers to the work of one S. W. Draper, who is said to have
believed, as Doane says:
"There are no Unix experts, in the naive sense of an exalted group whose knowledge is exhaustive and who need not learn more."
Here I must disagree. It is clear that an attempt to master the absurdities
of Unix would exhaust anyone.
Some programs even go out of their way to... anyway, I can't find my favourite one. (It had something to do with renaming files to '.', creating a bug rendering your entire filesystem unreadable. The command was cryptic, and really close to a different command with a very useful function.)
I could go on, but there's so many of these I'd be here forever. XD
You would hope modern unix-like systems have fixed at least some of these issues, but some of them appear to be fundamental aspects of the design.
(e.g. pipes and no real restrictions on filenames, to give some random more obvious examples.)
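Speaking of no restrictions on filenames: the dash trap from the excerpt still reproduces on a modern system, workarounds and all. A safe little sketch you can run in a scratch directory:

```shell
# Reproduce the book's dash-filename trap, then apply both of its
# workarounds. Uses a temporary directory so nothing real is touched.
tmp=$(mktemp -d)
touch "$tmp/-file2"                    # a path prefix sidesteps option parsing

# rm reads the leading dash as an option list and bails out ('l' is
# not a valid option), just like in 1994.
( cd "$tmp" && rm -file2 ) 2>/dev/null \
    && echo "removed (unexpected)" \
    || echo "rm still chokes on the leading dash"

( cd "$tmp" && rm -- -file2 )          # workaround 1: "--" ends option parsing
touch "$tmp/-file2"
( cd "$tmp" && rm ./-file2 )           # workaround 2: dodge the dash with ./
rmdir "$tmp"                           # only succeeds if the directory is empty
echo "cleaned up"
```

So the behaviour is unchanged, but at least both escape hatches are now documented in the usage messages of most modern implementations.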
I don't doubt a lot of stuff has been fixed, but comparing the contrast between what people say about linux and BSD with what these mailing lists said about the history of unix...
... Ah, unix... XD
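An aside on that 21-keystroke shell crash: it reads like an early example of what we'd now call a format-string bug - the shell apparently handed the user's input to printf as the format itself, so each %s made printf chase a string pointer that was never passed. (That's my reading of the excerpt's hints, not something it states outright.) The shell's own printf shows the same confusion harmlessly, since it substitutes empty strings instead of reading stack garbage:

```shell
input='!xxx%s%s%s%s'

# WRONG: the input becomes the format string, so the %s sequences are
# interpreted rather than printed. Shell printf substitutes empty
# strings for the missing operands, so this prints just "!xxx";
# C's printf would instead read garbage pointers and typically crash.
printf "$input"; echo

# RIGHT: keep the format fixed and pass the input as data.
printf '%s: event not found\n' "$input"
```

The fix the bulletin-board post hints at is exactly the second line: never let user input occupy the format argument.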
But on another point, if you take these complaints seriously, then the pervasiveness of unix becomes quite scary.
Because it's everywhere. Never mind BSD and Linux.
Mac OS is a Unix system now.
Windows contains a POSIX-compliant Unix subsystem.
Android is also Linux...
Most server systems that don't run Linux or BSD run some other, older variety of Unix...
If unix is as bad as that document makes out, that's a terrifying thought, right? XD
Chances are it isn't, of course...
But it does make you wonder.