Our switching center in Kitchener (Ontario, Canada) hosted 70,000 lines
Within a few months I worried that my electronics skills were going to fade, so I landed a part-time job as a repair technician at Mother's Music in Waterloo (owned by David Boehm). Work included:
The handwriting was on the wall for analog technology, so in 1976 I returned to evening classes at Conestoga College to catch up on the continuing semiconductor revolution; their two Digital Electronics courses covered TTL and CMOS. I also learned to program in BASIC and COBOL on their HP-3000 minicomputer.
Bell Canada began shifting to digital in 1978 when they replaced "paper tape based" long distance billing recorders with minicomputers (Interdata Model 70) and industrial controllers ("TeleSciences SRS-1200 Data Recorder") which employed HP-7970E 9-track tape decks. Bell Canada was looking for someone who wanted to maintain this stuff. Since I was attending night classes in digital electronics at Conestoga College, I was offered the job.
Also in 1978, Bell Canada introduced a minicomputer system named TELCON (TELetype CONcentrator) which was based upon the PDP-11/04. This beast replaced a room full of 32 paper terminals (ASR-35 teletypes and LA120 printers) which were connected to remote SL-1 and SP-1 switches. This led to other projects based upon the PDP-11/23, PDP-11/44, PDP-11/73 and PDP-11/84.
link: dips-n-certs (involved a huge amount of corporate training provided by: DEC, HP, etc.)
In 1978 I was contacted by Fred Hoffman of Bits-n-Bytes (a retail computer store in Waterloo, Ontario) to repair a HeathKit-H8 computer along with a HeathKit-H9 terminal with a horrible key-bounce problem. Fred asked for a quote so I offered to fix it for free provided I could keep both units for a month. That event allowed me to learn Benton Harbor BASIC and Benton Harbor DOS.
Later that same year, I purchased a 48k Apple2 with a 16k Language Card and two 5.25-inch floppy drives. I learned to program in UCSD Pascal, FORTRAN 77, 6502 Assembly and Sweet16.
The PS3 game console architecture consisted of one 3.2 GHz Cell Broadband Engine containing one PPE and six game-usable SPEs. This gaming console was so powerful that it could be used to aid in scientific research (see my folding-at-home page). Some universities would string 50 of them together to produce a PS3-based supercomputer.
So in 2008 I purchased a PS3 (with a folding-at-home screen saver) from Future Shop which came bundled with Grand Theft Auto IV. This game was a technological epiphany because you can drive around Liberty City for hundreds of hours, in various cars, listening to 18 radio stations, while experiencing different weather conditions. At certain points you can exit your vehicle to 'buy a hotdog' or 'board any number of subways'. A self-contained digital world on a single Blu-ray costing $59.
Jump ahead to 2012 when I borrowed a copy of Batman: Arkham City from my nephew. This self-contained digital world is more of an interactive comic book where you play the title role. While many games only entertain for 6-10 hours, both Batman: Arkham City and Grand Theft Auto IV can require more than 100 hours if you do all the side missions. So at the original price of $59, this entertainment will cost you 59 cents per hour. Way more cost effective than renting a movie.
I was in Sam the Record Man (Kitchener) when I first heard Switched-On Bach by Walter Carlos. At the time I was misled into believing this was music produced by a programmed computer ("computerized" was the colloquial phrase). It turned out that this 1968 recording was painstakingly assembled, note-by-note, on a Moog synthesizer employing a keyboard and sequencers to operate voltage-controlled oscillators and filters. Electronic "YES" but computerized "NO". On top of that, this was an analog recording of an instrument with an analog audio output. None of the instruments you hear were real, yet the associated harmonics could (when cranked up) blow the output transistors of most solid-state audio amplifiers.
A similar experience occurred a few years later when I heard the 1974 album Snowflakes Are Dancing by Isao Tomita, featuring the compositions of Claude Debussy.
In 40 years (1970-2010) humanity has gone from "really good fake musical instruments which only a few hundred could play" to today's "video gaming industry which employs tens of thousands while raking in $25 billion annually". The total number of XBOX-360 and PS3 machines sold by 2012 exceeds 145 million, and this number doesn't include other gaming consoles or the number of people playing games on high-end PCs.
- In 2012, video game development employed ~18,000 Canadians and added ~$2 billion to Canada's economy. Impressive, since Canada is only the number three country in game development behind Japan and the USA. Also remember that video game technology spills over into special effects for the movie industry.
- In 2012, Call of Duty: Black Ops II was released, which resulted in sales of $500 million (yep, half a billion) in the first 24 hours. A second half-billion was pulled in over the next 17 days. It must be pointed out that we never hear numbers like these for movie box-office sales. Many parents associate today's video games with yesterday's BOOB TUBE, but everyone must admit that video game development employs a lot of people.
Now I could have also mentioned lots of other technological changes including:
Most C programming language stories begin with the creation of UNIX by employees at Bell Labs in 1969. The original OS was written in macro assembler and was buggy. To make matters worse, Bell Labs was already working with multiple computers including the 18-bit PDP-7, but was planning to migrate to a 16-bit PDP-11 (ordered but not yet delivered, so they were in planning mode).
1) They solved the bug problem by creating the "B" language (which was based upon BCPL). This evolved into the "C" language, which would allow UNIX to be rewritten in "a computer language" rather than macro assembler.
2) They solved the migration problem by using a CPU-specific code generator tied to the compiler's back end. Now both UNIX and "C" were portable.
comment: processor architecture is defined by the macro assembler programmer's view of the CPU. For example, the PDP-11 was called 16-bit because it employed eight 16-bit general purpose registers (GPRs), even though this machine could address larger amounts of memory depending upon the attached bus (16-bit addressing alone only reaches 64 KB; Unibus programmable mapping resulted in 18-bit, 22-bit and 24-bit memory address spaces).
Boot-Up of a Portable Software Paradigm

Phase 1: PDP Assembler -> UNIX
Phase 2: UNIX and PDP Assembler yields B, which yields C
Phase 3: UNIX and C -> better UNIX
Phase 4: UNIX and C -> better C
Phases 5a + 5b: loop back through Phases 3 and 4 (each pass improves both UNIX and C)
Most people already know the story of how ARPA (now DARPA) funded the development of a self-healing digital communication network meant to survive a nuclear war. ARPANET began in 1969 but development accelerated in the early 1970s when Bell Labs began licensing UNIX to educational institutions for an incredibly low price (under monopoly rules, Bell Labs was "not allowed to be a software vendor" -or- "make any money vending software") which meant that the majority of ARPA-funded work slowly moved from assembler to C on UNIX. This also meant that anyone working on the ARPA project in C on UNIX could easily share their work with peers at other universities. By about 1980 it appeared that all these universities were producing network incompatibilities which would soon break everything so DARPA wanted...
At this point the story shifts to a gifted programmer at the University of California, Berkeley by the name of Bill Joy who was working with "C" on a DEC platform known as the VAX-11. By 1982, Joy had developed all the software necessary to implement what we now refer to as TCP/IP running on IPv4.
comment: click article-with-diagrams and note the large number of PDP and VAX machines in use at that time
By the early 1980s, many universities had modified UNIX sufficiently that they were able to re-brand/re-publish/re-license it to others. The University of California at Berkeley was one such group, offering BSD UNIX to other universities for free or to corporations for $1000 (IIRC) which was incredibly low compared to commercial OS products. BSD also introduced Berkeley Sockets, which allow a programmer to read/write an internet connection in the same way programmers read/write file systems. Bell Labs copied this idea then produced the STREAMS libraries (IIRC) for the AT&T flavor of UNIX.
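To see what "read/write an internet connection like a file" means in practice, here is a minimal Berkeley-socket sketch (the 192.0.2.1 address and the echo port are placeholders, not anything from my projects):

```c
/* Minimal Berkeley-socket sketch: socket() hands back an ordinary file
 * descriptor, so the same read()/write()/close() calls used on files
 * also move data over a TCP connection.                                */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);        /* feels like open()      */
    struct sockaddr_in peer;

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(7);                      /* echo service (example) */
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr); /* documentation address  */

    if (fd >= 0 && connect(fd, (struct sockaddr *)&peer, sizeof(peer)) == 0) {
        char buf[64];
        ssize_t n;

        write(fd, "hello\n", 6);                     /* same call as for a file */
        n = read(fd, buf, sizeof(buf));              /* ditto                   */
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);
    }
    if (fd >= 0)
        close(fd);                                   /* and the same close()    */
    return 0;
}
```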
Some people in the IT industry today are still very critical of "C" (or UNIX/Linux) while they simultaneously promote their favorite language (or OS) but "I think" they are fighting a losing battle. Why? When universities became financially squeezed in the early 1970s, many were forced to be more frugal so only considered inexpensive or free alternatives, which meant C and UNIX. Students now had access to all the source code of both, which meant they could improve the product, which had the unintended consequence of creating a "critical mass" of human talent. In this environment it didn't matter which language was better because a choice had already been made; students entering the workplace then stuck with the software they already knew.
Question: Is "C" a high level language or a low level language?
Answer: Both. It is a low level language (think portable assembler) which becomes a high level language as soon as your program references external libraries via the #include directive.
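Here is a minimal sketch of that "both" answer: the bit-twiddling is essentially portable assembler, while the printf() call is high level only because #include pulls in someone else's library to do the heavy lifting.

```c
#include <stdio.h>

int main(void)
{
    unsigned int status = 0x12;

    status |= (1u << 3);    /* low level: set bit 3, just as you would in assembler */
    status &= ~(1u << 1);   /* low level: clear bit 1                               */

    printf("status = 0x%02X\n", status);  /* high level: formatted I/O from libc    */
    return 0;
}
```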
Example 1:
I knew of many Nortel projects where conversion to "C" and UNIX increased stability while reducing costs (CALRS, which ran on BSD Unix, was one example). Nortel's flagship product at the time was DMS, which was written in a Pascal variant called Protel (Procedure Oriented Type Enforcing Language). In the early days of DMS, Nortel was spending a lot of money each year training freshly minted university grads how to program in Protel. Since many of these people already knew how to program in "C", Nortel embarked upon an internal project to convert DMS from Protel to C. They did a fairly good job except that the changeover was done "flash style" rather than gradually. Bugs and delays meant that Nortel's cash cow delivered virtually no revenue for 18 months. Oops!
Example 2:
We now know that 1988 was the year Dave Cutler left Digital Equipment Corporation for Microsoft. Cutler was responsible for the development of Windows-NT (New Technology) which was meant to (and did for a time) run on multiple architectures including: IA-32, MIPS, and Alpha, with plans for PowerPC, Itanium, AMD64 and ARM. Because this new OS was meant to run on multiple computer platforms, "C" was chosen because it was portable. Bill Gates (no idiot) was not convinced that 32-bit Windows-NT would be successful -or- would replace 16-bit Windows anytime soon. So unlike the Nortel FUBAR described above, Gates funded both projects then ran them in parallel (DECies vs. Microsofties). Perhaps this is where having a technologist at the helm of a company is better than a bean counter.
Example 3:
I didn't learn "C" programming until the summer of 1988. I was using Lightspeed C on a Macintosh and I remember thinking "this has got to be someone's idea of job protection". But it always produced small binaries so I thought it might have some advantages. Also, the concept of reusing your own code, or code written by others (free or purchased), via the #include mechanism seemed an obvious advantage. In subsequent years I noticed professional programmers getting really impressive results using "C" on IBM compatible PCs so I attended up the following evening classes at Conestoga College:
6811 assembler
In 1993 I did some contract work for a local (Kitchener/Waterloo) company described here. I designed the control board which employed a Motorola MC68HC11F1. All the software was written using a plain-text editor in 6811 Macro Assembler Notation on an Apple Macintosh. The binaries were generated using the uAsm 6811 cross-assembler from Micro Dialects. This approach worked well until the code exceeded 8K in size which introduced other problems (you always wanted to use branches for improved size and speed; but often needed to switch to jumps when the target destination was too distant).
Whitesmiths C
In 1995, I rewrote the whole thing for the Whitesmiths 68HC11 C Compiler/Assembler on an IBM-PC running an Intel 80386. Although I loved programming in 6811 Macro, Whitesmiths "C" was a much more productive tool. Implementing startup and interrupt vectors was child's play with this package. Descendants of this compiler are still available from COSMIC Software ( http://www.cosmic-software.com ), and you can find cross-compilers and cross-assemblers for every CPU chip still in production (COSMIC is just one of many vendors).
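I no longer remember the exact Whitesmiths directives, so here is only a generic C sketch of why vectors were "child's play": the table is just an array of function pointers which the linker (or a compiler-specific pragma) pins to the 68HC11 vector area at the top of memory. Handler names are illustrative, not from the original project.

```c
/* Generic 68HC11 vector-table sketch (not Whitesmiths-specific syntax).
 * There is no main(): on reset the CPU fetches the address stored at
 * 0xFFFE and jumps to it, which is why reset_handler never returns.    */
typedef void (*vector_t)(void);

static void reset_handler(void)      { for (;;) { /* main program loop  */ } }
static void timer_overflow_isr(void) { /* clear the flag, count ticks   */ }
static void unused_isr(void)         { /* trap any unexpected interrupt */ }

/* A real build pins this array to 0xFFD6-0xFFFF via a linker directive;
 * the addresses in the comments come from the MC68HC11 reference manual. */
const vector_t vector_table[] = {
    unused_isr,          /* 0xFFD6  SCI                       */
    /* ... intermediate vectors omitted for brevity ...       */
    timer_overflow_isr,  /* 0xFFDE  Timer Overflow            */
    unused_isr,          /* 0xFFF2  IRQ                       */
    unused_isr,          /* 0xFFF6  SWI                       */
    unused_isr,          /* 0xFFFA  COP failure               */
    reset_handler        /* 0xFFFE  RESET (program entry)     */
};
```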
C++ introduces object-oriented concepts to C, which can only result in greater productivity with fewer bugs. For example, modern client software, like browsers (especially tabbed browsers) from all vendors, would be impossible without C++. In fact, I suspect the whole client-server paradigm has been taken further with C++ than was possible with C or any other language. Be sure to think about object-oriented technology whenever you see something (JPEG, GIF, Java Plugin, WAV player) sitting in the middle of your web page.
Up until 2002, I was able to do all my application programming using HP-BASIC-1.7 Alpha for OpenVMS.
In 2002 I was tasked with building an interface into IBM's national ticketing facility in Lexington, Kentucky where the technology of choice was IBM's MQSeries.
In 2010 I ran into a couple of situations where I had to directly interface with open-source software written in "C". One application involved interfacing an HP-BASIC application to OpenSSL. The second involved interfacing an HP-BASIC application to gSOAP. With most so-called "DEC languages", a developer can supply the compiler with command-line switches to control how variables are written to the symbol table which is used during linking. The appropriate "case control" switch doesn't exist with HP-BASIC-1.7 which means all symbols are up-cased. This means that a programmer needs to write a wrapper in order to facilitate linking. While this is possible, it might be more trouble than it is worth. Add to this the fact that HP-BASIC doesn't have all the data-types available to C/C++ (for example, there are no unsigned variables in HP-BASIC).
For me, it was easier to write the apps in "C" (HP C V7.3-009 on OpenVMS Alpha V8.4) then call the open-source software directly.
The two C programs I wrote (one client, one server) are fairly ugly because I used pointers to reference the XML structure buried within the SOAP packet. I found a few spare hours in 2013 to go back to gSOAP in order to play with suggestions for a table-walker, which can only be done well in C++ (pointer-to-pointer work in C is possible but looks really ugly; I have also seen table-walkers in C# and Java but those languages are out of scope on this project). Anyway, this time I used HP C++ V7.3-009 for OpenVMS Alpha V8.4 and discovered the resulting source code was smaller and beautiful. Not sure if I will ever be granted time to rewrite the "working" C-based gSOAP apps in C++.
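For anyone wondering what "pointer-to-pointer work in C" looks like, here is a generic stand-in (definitely not the gSOAP code): a tiny element tree plus a walker, where the double indirection in prune() is the part that gets ugly fast.

```c
/* Generic tree-walker sketch; node names and values are made up. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct node {                 /* stand-in for one XML element */
    const char  *name;
    const char  *value;
    struct node *next;        /* sibling      */
    struct node *child;       /* first child  */
};

static struct node *make(const char *name, const char *value)
{
    struct node *n = calloc(1, sizeof *n);
    n->name  = name;
    n->value = value;
    return n;
}

/* Recursively print the element tree; depth controls indentation. */
static void walk(const struct node *n, int depth)
{
    for (; n != NULL; n = n->next) {
        printf("%*s<%s> %s\n", depth * 2, "", n->name, n->value ? n->value : "");
        walk(n->child, depth + 1);
    }
}

/* Remove every element with a matching name; the pointer-to-pointer lets
 * us unlink a node without special-casing the head of each child list
 * (children of a pruned node are leaked in this short sketch).           */
static void prune(struct node **link, const char *name)
{
    while (*link != NULL) {
        if (strcmp((*link)->name, name) == 0) {
            struct node *doomed = *link;
            *link = doomed->next;            /* unlink and discard */
            free(doomed);
        } else {
            prune(&(*link)->child, name);
            link = &(*link)->next;
        }
    }
}

int main(void)
{
    struct node *root = make("order", NULL);
    root->child       = make("item", "PS3");
    root->child->next = make("debug", "trace-data");

    prune(&root->child, "debug");            /* the *link gymnastics in action */
    walk(root, 0);
    return 0;
}
```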
This thought continues below: Epiphany-5
Not much to say here except this: wherever you find C/C++ you will also find UNIX® (the trademarked name), Unix (the name of the technology), and Linux.
As mentioned above, Bell Labs created the "C" programming language with the intent of squeezing the bugs out of Unix. In case you haven't been paying attention, Unix is now only written in "C", which may leave you with chicken-or-egg thoughts.
After the US government finished (1983-1984) the breakup of the Bell System, AT&T (no longer a monopoly) inherited Bell Labs and then attempted to turn UNIX into a marketable commercial product.
comments: MIT lifer Richard Stallman tried to get around the commercialization of Unix by creating the GNU Project (GNU's Not Unix), which was a total Unix rewrite. Since writing OS applications is a whole lot easier than writing a kernel, it shouldn't be a surprise to anyone that GNU wasn't entirely free of Unix until 1992.
Engineering students, specializing in both hardware and software, had studied Bell Labs Unix kernel "source code" for years and were now worrying about the legality of this practice. Many universities began to look for alternatives and I remember the MINIX kernel (from the "Free University" in Amsterdam, Netherlands) being a popular contender. I might even have a hardcover manual stashed away someplace in my home office.
I sometimes wonder what is in the Scandinavian water supply because:
Today, the merger of the Linux kernel with GNU programs is simply referred to as Linux although some prefer the alternative GNU/Linux (see: GNU/Linux naming controversy)
There are already huge volumes of web information available about Linus Torvalds, so let me include one quote from his bio found here:
In 2003, Torvalds left Transmeta to focus exclusively on the Linux kernel, backed by the Open Source Development Labs (OSDL), a consortium formed by high-tech companies, which included IBM, Hewlett-Packard (HP), Intel, AMD, RedHat, Novell and many others. The purpose of the consortium was to promote Linux development. OSDL merged with The Free Standards Group in January 2007 to become The Linux Foundation. Torvalds remains the ultimate authority on what new code is incorporated into the standard Linux kernel.

Wow, that is a lot of corporate support (critical mass?).
According to www.archive.org, the site www.osdl.org in 2003 mentioned these partners (alphabetical order): Alcatel, Cisco, Computer Associates, Dell, Ericsson, Force Computers, Fujitsu, HP, Hitachi, IBM, Intel, Linuxcare, Miracle Linux Corporation, Mitsubishi Electric, MontaVista Software, NEC Corporation, Nokia, Red Hat, SuSE, TimeSys, Toshiba, Transmeta Corporation and VA Software. OSDL is now shut down and everything is redirected here: http://www.linuxfoundation.org/ but their corporate member list is still impressive.
Back in 2005, Google wanted to put their Google Talk app on Apple's iPhone (via the iTunes store) but Steve Jobs refused because the app would allow people to make free long distance calls via the internet (Jobs was certain this app would cause problems with one of the iPhone's main financial backers, "Cingular Wireless", which was a division of AT&T.)
In 2006, Google made an ultimatum to Apple: either allow Google Talk to be placed on the iPhone or we (Google) will produce a competing product called the gPhone.
Apple refused which caused Google to purchase California Linux vendor Android Inc. Google then created the Open Handset Alliance where member companies would be given the Android OS Software for free provided the manufacturer preset customer modifiable preferences to do searching at Google (where Google makes most of their money).
There isn't much difference between gPhones and tablets (other than the screen size) so it should be no surprise that most tablet manufacturers would power their devices with Android (er, Linux). Other emerging operating systems, like Chrome OS (which is currently only found in Google's Chromebook) and Firefox OS, are also just different Linux variants, so you can see that Linux is everywhere.
Back in the late 1980s, I found myself, once again, in the Field Services Lab (Training Center) of Digital Equipment Corporation at 12 Crosby Drive, Bedford, Massachusetts. We had lectures in the morning and lab assignments in the afternoon. I was assigned system W4 (Aisle W, Bay 4) which happened to be a VAX-8550. While I was working on this system I noticed visitors occasionally walking through a curtained-off area in Aisle X. During our coffee break I mentioned this to my instructor, who told us that the system hiding behind the curtain was a VAX-6000 which featured a new optional circuit board capable of vector processing. He further explained that vector processors were all the rage in various kinds of scientific computing like "computing particle trajectories" or "climate circulation models" because they could perform a single instruction (e.g. multiply or multiply-and-accumulate) on multiple data points. Those data points can represent anything you wish, including a location in three-dimensional (or higher) space. In those days, "vector processing" was available as an expensive option ($$$) but today it is built into all modern CPUs, although most people are not aware of it.
comments: we were in the field lab reserved for DEC employees because a recent rain storm had flooded the customer lab. This place was so large that it was impossible to see the far walls. When I mentioned this observation at coffee break the next day, one American Field Engineer said "this place is nothing compared to the NSA which hosts computer systems by the acre (that's 0.405 hectares for non-Americans)"
This side of y2k, modern "graphics cards" employ 1000-3000 streaming processors so that numerous vector/tensor operations may be executing in parallel. On top of that, if you also remember that graphics cards typically have between 1 and 4 GB of private memory, then you come to the realization that graphics cards actually provide a private protected computing environment within your computer platform. Originally, cheap graphics cards only supported single-precision floats while many today also support double-precision floats. In fact, some computer engineers look upon graphics cards as an array of several thousand floating point co-processors (think: several thousand 80387 co-processor chips).
Going even further, specialty companies now produce motherboards which can simultaneously host four, or more, graphics cards. Meanwhile, companies like Nvidia also manufacture graphics cards which do not have any monitor connectors because they are only used for number crunching.
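To make "a single instruction on multiple data points" concrete, here is the classic SAXPY loop in plain C. Every iteration is independent, which is exactly why a vector unit can chew through it in chunks and a GPU can hand one element to each of its thousands of streaming processors (CUDA/OpenCL kernels look almost identical inside the loop body).

```c
#include <stdio.h>

#define N 1000000

static float x[N], y[N];

int main(void)
{
    float a = 2.5f;
    int   i;

    for (i = 0; i < N; i++) {          /* fill with something predictable */
        x[i] = (float)i;
        y[i] = 1.0f;
    }

    /* SAXPY: y = a*x + y.  A scalar CPU does this one element at a time;
     * vector hardware applies the same multiply-and-accumulate to many
     * elements per instruction.                                          */
    for (i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[10] = %f\n", y[10]);     /* expect 2.5*10 + 1 = 26.0 */
    return 0;
}
```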
Here's a brief snapshot of vector processing development:
Then CISC and RISC vendors began adding vector processing instructions to their CPU chips which blurred everything:
Development over the decades:
To learn more:
In the early 1990s, Microsoft was smaller so was "looking for problems to solve" and "markets to expand into". Since many people were attempting to develop computer games, Microsoft informally aligned itself with SIGGRAPH to help produce tools. Next, they offered to do a free port of the game Doom (which only ran on DOS) to Doom95 (for Windows95) just to develop skills. Their first Graphics API (application programming interface) was named DirectX and appeared in 1995 for Windows-95 and 1996 for Windows-NT4.
DirectX is neat because it defines a number of hardware abstractions in software (including a reference graphics card) then replaces those software devices with hardware when compliant hardware is present. This means that game programmers do not need to worry which CPU, or GPU, is present. Just send your commands to DirectX and it will carry out your wishes.
While recently poking around a game programmer site, I noticed this caveat: Microsoft recommends you call DirectX directly from Visual-C/C++ or indirectly from a .NET wrapper. Doing direct calls will result in the fastest code possible.
I have used Microsoft Visual Studio for a few corporate projects but am no expert. I was always under the impression that you could set the build-options of all Visual Studio languages to produce either "x86-binary for Windows" or "MSIL for the .NET framework". So is it possible that DirectX expects to be called from C/C++ for some reason? I am not certain, but I do know that COM (Component Object Model) is the basis for other Microsoft technologies and frameworks, and that COM is written in C++.
Here is the opening paragraph of the Introduction from the book "Introduction to 3D Games Programming with DirectX 11" (which I highly recommend to programmers):
quote: Direct3D 11 is a rendering library for writing high performance 3D graphics applications using modern graphics hardware on the Windows platform. (A modified version of DirectX 9 is used on the XBOX 360.) Direct3D is a low-level library in the sense that its application programming interface (API) closely models the underlying graphics hardware it controls. The predominant consumer of Direct3D is the games industry, where higher level rendering engines are built on top of Direct3D. However, other industries need high performance interactive 3D graphics as well, such as medical and scientific visualization walkthrough. In addition, with every new PC being equipped with a modern graphics card, non-3D applications are beginning to take advantage of the GPU (graphics processing unit) to offload work to the graphics card for intensive calculations; this is known as general purpose GPU computing, and Direct3D 11 provides the compute shader API for writing general purpose GPU programs. Although Direct3D is usually programmed from native C++, stable .NET wrappers exist for Direct3D so that you can access this powerful 3D graphics API from managed applications.
DirectX is a collection of other modules. Direct3D and D3DX (Direct3D Extension) are two of many. D3DX is a math library capable of doing math in three (or more) dimensions to support 3d video games but some programmers used D3DX to do scientific work. This led Microsoft to develop XNA (unofficially: DirectX-Nextgen-Architecture) which is a better vector math library.
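To give a feel for what these math libraries provide, here is a plain-C sketch of the kind of 3D vector operations involved (this is not the D3DX or XNA API, just the underlying math):

```c
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float length(vec3 a)      { return sqrtf(dot(a, a)); }

static vec3 normalize(vec3 a)
{
    float len = length(a);
    vec3  r   = { a.x / len, a.y / len, a.z / len };
    return r;
}

int main(void)
{
    vec3 v = { 3.0f, 4.0f, 0.0f };
    vec3 n = normalize(v);

    /* expect length 5.0 and unit vector (0.60, 0.80, 0.00) */
    printf("length = %.1f, normalized = (%.2f, %.2f, %.2f)\n",
           length(v), n.x, n.y, n.z);
    return 0;
}
```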
game vs. non-game
Early in 2013, Microsoft announced that DirectX and XNA will both be folded into Windows-8 and will only be available as a Windows Kernel Service. Oops! Scientific application developers have been told to move to DirectCompute (but many will move to OpenCL or CUDA).
Most people do not know that the first "X" in XBOX represents DirectX. Yep, the XBOX-360 runs a modified version of DirectX-9 (despite what you have read on the web, nothing higher).
Many people do not know that the XBOX-360 is powered by a tri-core PowerPC chip from IBM rather than an x86 chip from either Intel or AMD.
Now I guess it is no surprise that DirectX is written in C/C++ and is just compiled differently to generate code for different target processors (game console or Windows PC). Doing this in a non-portable language -or- macro assembler would be too labor intensive as well as bug prone.
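The DirectX source itself isn't public, so here is only a toy illustration of the one-source/many-targets idea: the bulk of a portable C code base compiles unchanged, and a handful of #ifdef blocks (macro names below are made up) absorb the per-CPU differences.

```c
#include <stdio.h>

#if defined(TARGET_POWERPC)           /* e.g. an XBOX-360 class build   */
#   define PLATFORM "PowerPC game console"
#elif defined(TARGET_X86)             /* e.g. a Windows PC build        */
#   define PLATFORM "x86 Windows PC"
#else
#   define PLATFORM "default/unknown target"
#endif

int main(void)
{
    /* the vast majority of the code base is plain portable C like this */
    printf("render engine starting on: %s\n", PLATFORM);
    return 0;
}
```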
A few months back (May of 2013) I was trying to learn more about parallel programming so was reading a book titled “CUDA Programming: A Developer's Guide to Parallel Computing with GPUs” where the author gives evidence that any high-end desktop today (2013) with multiple graphics cards (if your motherboard supports them) can out-FLOP anything found at the top of www.top500.org twelve years ago in 2001. Wow! Who knew? One restriction here is that the CUDA technology is only available in C/C++ as a bunch of included libraries. Sure, I was aware of vector instructions in VAX and Alpha CPUs, but these appeared to only provide pseudo-parallel programming capabilities. Graphics cards from NVidia and AMD/ATI, on the other hand, often provide several thousand streaming processors which are available for whatever you wish; this is true parallel programming on the desktop. You need CUDA to talk to these cards when you want to do math, but I later discovered that lots of people were using the huge number of vector math libraries created for DirectX/Direct3D as well as OpenGL. Apparently all the modern games would not be possible without these libraries. Talking about DirectX/Direct3D for a moment, I’ve visited a few of the game programmer sites where most people say “Microsoft allows direct communications with the DirectX/Direct3D APIs from Visual-C++ but for all other languages you need to go through a .NET wrapper (which reduces performance)”. Oops! Another plug for C++.
I can only recall three interconnecting technologies that made a large contribution to the computing industry
I hadn't given much thought to clustering or parallel software on microcomputers until I received this recent (2013) advert from Intel for two products:
These products were designed to plug into the Microsoft Visual Studio IDE (Integrated Development Environment) targeted at Windows or Linux. However, after visiting the Intel site on 2013-07-20 it appears that these Windows-based tools now only generate code for Linux targets. I'm not sure if a windows flavor is around the corner or not.
This thought continues below: Epiphany-19
The Last of Us - is a movie-quality experience about future life after a biological holocaust. In part of the game, YOU play the role of Joel, who is traveling across a post-apocalyptic United States in 2033 in order to escort the young girl, Ellie, to a research facility where it is believed that Ellie may be the key to developing a vaccine. When Joel and Ellie become separated, YOU play the role of Ellie for a time.
Grand Theft Auto V - is played from a third-person perspective in an open world environment (translates into approximately 49 square miles or 127 square km) allowing the player to interact with the game world at their leisure. The game is set within the fictional state of San Andreas (based on Southern California) and affords the player the ability to freely roam the world's countryside and the fictional city of Los Santos (based on Los Angeles). The single-player story is told through three characters whom the gamer switches between to move the story along.
Everyone reading this will have their own examples. My first memories involve VAXclusters which consisted of multiple VAX computers running the VMS operating system. They could be tightly coupled through a common memory interface, or medium coupled through network communications. Applications were programmed in such a way that the loss of one of the computers does not cause the loss of any storage or transactional data. In fact, the recommended way of performing an OS upgrade was to roll one computer out of the cluster, do the upgrade, roll the computer back into the cluster then repeat the operation on the next VAX.
32-bit VAX evolved into 64-bit Alpha, which meant that this technology came to be referred to by the lesser-known name VMS Cluster. Improvements allowed the distance between clustered processors to increase, and such a cluster could be seen in operation during the 9/11 attacks on New York, when one VMS Cluster processor was destroyed along with one of the twin trade towers while its partner in New Jersey continued transactional processing without dropping a single transaction. (not something any company with a conscience would want to advertise)
specialized computers
| protocol | function | description | how high in the cloud? |
|---|---|---|---|
| dns / bind | domain name service / Berkeley Internet Name Domain | translates names into IP addresses | high |
| smtp | simple mail transfer protocol | email OUTBOX | medium |
| pop3 | post office protocol | email INBOX | medium |
| http | web server | transfers (usually html) formatted data | low |
Most technical people know that protocols like telnet and ftp are connection oriented, and that connection stays up until it is terminated by the client (via user command) or server (timeout). Most people do not know that http (the protocol to support www and/or web) is connectionless. Yep, you read that correctly; Before Y2K, a browser opened a connection to a web server, retrieved a page of data then closed the connection. If there were multiple pictures on the page, a separate open-close transaction was necessary for each one.
caveat: what I am describing here is HTTP/1.0 which still works that way. A second newer protocol called HTTP/1.1 was added in 1999. The keep-alive feature of this protocol keeps the TCP/IP connection open for a short time (programmable by the server) while the client makes multiple requests of the server.
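A back-of-the-envelope comparison of the two behaviours (host name and path are made up; the header names are standard HTTP):

```c
#include <stdio.h>

int main(void)
{
    /* HTTP/1.0: the server sends the page, then closes the connection.
     * A page with 5 images costs 6 separate TCP open/close cycles.       */
    const char *http10_request =
        "GET /index.html HTTP/1.0\r\n"
        "Host: www.example.com\r\n"
        "\r\n";

    /* HTTP/1.1: the connection is persistent, and the server's keep-alive
     * timeout decides how long it stays open, so the same page plus its
     * 5 images can ride over one TCP connection.                          */
    const char *http11_request =
        "GET /index.html HTTP/1.1\r\n"
        "Host: www.example.com\r\n"
        "Connection: keep-alive\r\n"
        "\r\n";

    printf("--- HTTP/1.0 ---\n%s--- HTTP/1.1 ---\n%s",
           http10_request, http11_request);
    return 0;
}
```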
| protocol | description |
|---|---|
| http: | non-secure web transactions (usually on port 80) |
| https: | secure web transactions (usually on port 443); https = http with security |
One programmer, or perhaps it was a team, wanted to triple-encode session keys in SSLv3 and so wrote "C" routines to do so, but made a mistake in the declaration of one "C" variable, making it a "long" (32-bits) rather than a "long long" (64-bits). This had the effect of reducing the resultant key space to 25%, which would be easier to crack. No one knows how much of this open-source code made it into other products.
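This is not the actual SSLv3 source, but here is a sketch of that class of bug: stuff 64 bits of key material into a variable that is only 32 bits wide and the upper half silently disappears.

```c
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint64_t full_key = UINT64_C(0x0123456789ABCDEF);  /* intended 64 bits of key */

    /* the bug described above: a 32-bit declaration ('long' on that
     * platform) silently throws away the top half of the key           */
    uint32_t truncated = (uint32_t)full_key;

    printf("intended key material : %016" PRIX64 "\n", full_key);
    printf("what actually survives:         %08" PRIX32 "\n", truncated);
    return 0;
}
```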
Enter TLS1
The IETF improved upon SSLv3.0 and might have called their new protocol SSLv3.1 or SSLv4.0 but, since they did not want people to continue to use the old libraries or even accidentally link against them, they named their new protocol TLSv1.0 (Transport Layer Security). They also modified the calling structure to prevent accidental linking to old libraries. Security improvements continued with TLSv1.1 which morphed into TLSv1.2
Conclusions

Not only has security allowed consumers to securely purchase goods from online sites using credit cards and/or PayPal, it has allowed many companies to put their corporate records into computers located in the cloud. One neat feature of cloud computers is their ability to automatically backup data to other cloud computers located around the world. Companies would only do this if it was secure.
Almost anyone alive today will recognize that Oracle has been very successful in getting their flagship database product connected to communications networks, including the internet. The following timeline may shed some light on Oracle's view of these phrases.
| Product | Year | Comments |
|---|---|---|
| Oracle 8 | 1997 | |
| Oracle 8i | 1999 | i = internet (implemented by incorporating a built-in JAVA VM) |
| Oracle 9i | 2001 | |
| Oracle 10g | 2003 | g = grid computing |
| Oracle 11g | 2007 | |
| Oracle 12c | 2013 | c = cloud computing |
- video cards
- solid matter displays
- 3d display technology
- cloud computing
That last item should raise a few eyebrows for several reasons:
In 2013, Microsoft released the XBOX-One and Sony released the PlayStation 4, and both platforms are based upon an 8-core Jaguar APU manufactured by AMD. What's an APU? It is a CPU (central processing unit) and GPU (graphics processing unit) combined into one chip (or chip carrier). I do not need to point out that 8-core chips are not yet available in the retail market. Video consoles still lead the way.
In 2016, Intel released a new core-i7 desktop processor featuring 10 cores (this extreme edition was aimed at the gaming community)