Business, at the speed of trust
I grew up in the picturesque villages of South Orange and Maplewood, New Jersey, a short train ride from the quaint suburbs into midtown Manhattan. My family owned a small construction business in town. In the summer my father worked us hard - harder than the rest of his crew - to show no favoritism and to teach us the value of work. He wanted to instill in us an appreciation for the higher education he couldn't have. After that, being in the classroom provided much-welcomed relief; maintaining top grades was easy compared to laying thousands of feet of cobblestone curb and hot asphalt as we built roads in the sweltering summer heat, our hands sometimes bleeding. But the work also felt good at the end of a long and grueling day.
Music, top-40 hits, and Grammy awards weren't foreign in my immediate family. The youngest of three brothers, I was in awe of the musical creativity all around me. All three of us wrote and played music, but my talents paled in comparison to those of my older brothers, whose multiple platinum and gold records now adorn the walls of my home office. You'd often find us in the basement of a rockin' Victorian-era house on a quiet, maple-lined street; the band could be heard more than a block away. To this day I continue to write and play, though I still haven't sold a single song. I play for pleasure and do occasional open-mic nights - as long as they don't give me the hook. You'll never find me remotely qualifying for The Voice or American Idol - maybe American Idle would be more like it, as the musical gene only brushed me and was passed on full-strength to my daughters.
A lifelong love of making images
As both of my brothers' songs consistently ascended the Billboard charts, my talents lay elsewhere. At age twelve I developed and printed my first roll of film; I was hooked for life. Fifty years later, image-making remains my first love, and although I've been making images digitally since the late '90s, I still shoot, develop, and print film on occasion. Countless thousands of frames since my youth led me to major in Photography at Bard College before changing to Communications and Journalism at Marist, a small college perched high on the banks of the Hudson River across from the Catskill Mountains, only a few hours north of Manhattan. Until recently I had a brick-and-mortar gallery. I continue to maintain a full-scale traditional color and black & white darkroom and a professional portrait studio. I shoot mostly 6x7 and large-format 4x5 film in addition to high-end digital. Photography remains my escape and avocation, and I take on occasional commercial assignments producing massive gigapixel murals for indoor visitor centers, up to thirty feet or more across.
All that glitters is not gold
My first job straight out of college was with the American Broadcasting Companies at the ABC headquarters and broadcast studios in midtown Manhattan. The studios were located on Central Park West, near Lincoln Center and Columbus Circle. I worked on the sets of Good Morning America, Good Morning New York, ABC News, soaps, and many more. My job as a Guest Relations Representative - complete with the ABC-TV jacket and signature red tie - was to accompany visiting dignitaries and celebrities before and after their on-set appearances for live and recorded broadcasts. During daytime hours I was often assigned to the penthouse offices of the ABC headquarters on West 54th and the Avenue of the Americas, assisting the CEO and division presidents. The list of famous guests with whom I spent substantial time in long one-on-one conversations is too long to recount, but it left an indelible mark, as one might imagine. I also developed a fair clientele of celebrities for whom I did promotional imaging on the side. Although I wanted to become a news producer at the time, I saw the dues paid by many and decided that such a career path was not aligned with my goals, so I declined an opportunity and instead began graduate studies in computer programming and Management Information Systems back at Marist while seeking a technical career.
Introduction to the mainframe and microcomputers
While at Marist I became intrigued with computers and began graduate studies in computer programming in the Master of Information Systems program there. While roaming the halls during one summer session I spied a poster advertising a Co-Op position at IBM in the photography and media production department. Needing income to support myself, I applied and began what morphed into a 38-year career at Big Blue.
The Co-Op position turned out to be far more than taking pictures. We made thousands of images, of course - from massive white rooms full of shiny new mainframe computers to micrographs of bacteria growing on chip substrates. However, I became keenly interested in programming a small AVL microcomputer used to sequence sixteen slide projectors into massive, heart-stopping, forty-foot-wide, Hollywood-class rear-projection multimedia extravaganzas. The productions were synced to concert-scale, ground-rumbling sound systems and staged during major corporate events attended by thousands of employees. It was the pinnacle of multimedia production before the advent of the personal computer.
When the company decided to farm out visual operations, I was asked what I wanted to do next. My manager, aware I had a mixed background in journalism, communications, and computer science, connected me with IBM's technical writing department, called Information Development. Soon after, I began the core of my forty-year career as a content professional, starting out by writing five-pound manuals for mainframe operating systems including VM and MVS - now z/OS.
Technology was not foreign to me. Prior to my Co-Op position, I had worked for a year at IBM's massive data center in the sprawling chip-manufacturing facility at IBM's East Fishkill plant. I managed weekly payroll processing for ten thousand employees - sorting and processing cases of nine-edge punch cards using an ancient IBM Model 083 card sorter and punch card readers. While doing so I wrote data center 'run' books to train other operators. As the saying goes - the writing was on the wall.
A new career as an IBM Information Developer
My new job as a junior Information Developer intrigued me. I was no longer tied to a typewriter; I could write on a shiny new IBM 3270 terminal tethered to an IBM VM mainframe running the Conversational Monitor System (CMS) at IBM's main programming lab. The programming complex was nestled among apple orchards at IBM's Myers Corners Lab in the Mid-Hudson Valley, not far from where I attended college.
My very first task as a junior writer was to rewrite user guides and encode them electronically using a new dedicated publishing tool called Information Structure Identification Language (ISIL), based on the Generalized Markup Language (GML) that had been invented at IBM. Little did I know then where that tango with GML would eventually lead.
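For readers who have never seen GML, here is roughly what marked-up text looked like in the starter-set style - an illustrative sketch of the flavor, not actual ISIL markup:

```
:h1.Operator Procedures
:p.This chapter describes the daily startup sequence.
:ol.
:li.Power on the system console.
:li.IPL the operating system.
:li.Verify the message log.
:eol.
```

The tags described what the text was, not how it looked; the formatter decided the presentation. That separation of content from form is exactly what SGML, HTML, and XML later inherited.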
I became a prolific writer, with productivity that far exceeded several of my senior peers, some of whom were resistant to change. I used all the advanced technology at my disposal and typically finished my assignments far ahead of schedule - giving me time to play and experiment with new technology, such as writing text-automation macros and utilities that sped up my work even further.
Dawn of the IBM Personal Computer
This was also the dawn of the IBM Personal Computer. I was lucky and received one of the very first units that rolled off the manufacturing line. In a mainframe shop where such a device was foreign, they said, "give it to the kid" and asked me to evaluate the machine and software and report what I had learned. When the time came, the little beast sat on a long executive mahogany conference table in my lab director's conference room, and I delivered my review to a cadre of managers with a one-line summary: "Our IBM 3081 mainframe had a baby, and I predict it is going to eat its mother." It elicited hearty laughter. Only a few years later, mainframe sales plummeted as companies invested in microcomputers - almost driving Big Blue out of business; no one was laughing then.
A system builder for life
I vividly recall that sunny May 1983 afternoon; it was the day I purchased my very own IBM Personal Computer. It was an IBM 5160 PC/XT with a whopping 10MB hard drive, an IBM monochrome display, and an IBM 4101-001 dot matrix ProPrinter. It cost me just north of $3,000 from an IBM retail store in Albany, NY, and that was with an employee discount. I believe the list price of an IBM PC/XT back in 1983, with a color monitor and printer, was somewhere around $8,000, with a typical retail price closer to $5,000.
In the ensuing years I enhanced it component by component, plugging integrated circuits (ICs) into green printed circuit boards (PCBs), setting jumpers, and wrestling with memory-address conflicts and hardware interrupts (IRQs). Remember, PCs back then didn't have the visual CMOS-based BIOS setup utilities modern PCs have today - you configured hardware components manually using DIP switches on the motherboard. It quickly became an expensive and time-consuming endeavor that would never end. I've been building new systems ever since, so I guess that makes me a system builder going on more than 35 years and counting.
Building and modifying PCs became another of my lifelong loves. At home I treasured my precious IBM 5160 PC/XT and continually enhanced it over the years. Just about everything I ever fundamentally needed to learn about personal computing, I learned on that machine. I was no novice at computers, even back in the early '80s when the IBM PC emerged; my prior IBM training and work with Assembler/370, PL/I, APL, and JCL in an MVS operating system development lab provided a solid foundation.
However, as life progressed, things changed. I was horrified when I discovered that my beloved PC/XT had been given away in a garage sale before I could retrieve it. It didn’t matter that I had customized countless systems since then - you never forget your first love, as the saying goes.
Thus you now know why, decades later, I decided to resurrect the original Big Blue Beast - only better. Call it nostalgia, call it retribution, diagnose it as you will, but alongside my massive, modern Intel i7-3930 six-core behemoth sits The Big Blue Beast, a reconstructed IBM PC/XT Model 5160 - on steroids.
I also love a challenge. "It just can't be done" was the response when I wanted to build a multi-OS computer that could run everything from DOS all the way through current Windows. I was told by multiple "experts," including a popular magazine editor, that it wasn't technically possible. It's not a virtual machine, mind you, but a single physical machine that runs every operating system with full function. The goal was to build a multi-boot wonder, one that didn't need hardware components swapped out between boots - and I did.
The birth of practical hypertext and electronic books
When I got my hands on my very first IBM PC, with a whopping 16KB of RAM and floppy disks, I explored some of the earliest home-grown PC-based text processing apps. A colleague and I obtained a tiny character-mode demo program called Hype - arguably one of the very first PC apps that could create text with working hypertext links. This was the early 1980s; hypertext, HTML, and the web wouldn't debut until a decade later, in the early '90s. We were mesmerized by creating the simplest of links and talked about how the idea could change the world.
At the time IBM had developed some of the earliest electronic book technology, called IBM BookManager. BookManager complemented GML, which by then had become a mainframe software offering called IBM BookMaster. A colleague of mine subsequently developed a PC-based version of both the BookManager book builder and reader, to which I had early access.
I was handy with programming by now. IBM, in its infinite wisdom, had decided that all Information Developers writing about mainframe operating systems needed to go through the same intense systems-software programmer training as the full-time engineers. As a result, the writers had to take batteries of months-long, full-time courses at the IBM Programming Development Center. You either passed every grueling programming exam or you agreed to resign. That wasn't easy for writers, most of whom had degrees in English, history, and other liberal arts. The programmer hires were mostly valedictorians from the nation's top technical programs; they could read five-hundred-page computer dumps in hex as easily as the Sunday paper while we debugged our assembler code. They gave us writers funny looks when we asked why registers were loaded the way they were; the engineers didn't question why, but we were journalists, and learning how things worked was central to our craft. The only problem with training right-brain creative types as software engineers is that it made some of us quite dangerous.
Structured content - and an audacious idea
I hadn't lost my love for media production. After my old media production department shut down, I was able to abscond with a large cache of professional video production gear, including three-quarter-inch video editing decks, controllers, monitors, laser disc decks, and more. My manager at the time didn't object to my installing the gear in my generic corporate office in the middle of the sprawling programming lab. You see, I was setting up the first of many human factors labs in the company and had access to the earliest experimental digital audio and video adapters for the IBM PC.
While tinkering with hyperlinks in electronic books, I imagined how cool it would be if audio and video animations could be embedded in electronic book text as hypertext links. I managed to launch an embedded media player from a link, to the amazement of my colleagues, who urged me to write up a formal invention disclosure and pursue a patent; so I did.
Remember, this was the age of the mainframe, when personal computers were new and viewed as mere toys - for most, little more than fancy mainframe terminals. One day a gentleman named Dr. Charles Goldfarb showed up at my office to talk about my invention disclosure, which was based on GML. Little did I know that he had invented document markup language years earlier with two other IBMers, Mosher and Lorie (get it? Goldfarb, Mosher, and Lorie - G.M.L.).
Charles was gung-ho on pursuing a full patent filing, but the mainframe-biased legal eagles thought it provided little material value and instead opted for a formal publish that would place the invention in the public domain. The lawyers put the invention disclosure formally on file in the US Library of Congress and published it in Big Blue's prestigious and widely read IBM Journal of Computing. To date, there has been no earlier disclosure or patent like it on file. It would be another year or two until the Mosaic web browser, HTML, and the web made their debut. One can only imagine what might have been had that disclosure been filed and granted as a patent.
My work had led me to build the first full-scale multimedia production lab at IBM, and I went on to lead the development of similar labs throughout the company. I also ran small conferences for leads from across the company and developed technical and design standards for multimedia content production. On a break during one of those meetings, I was cornered by two key colleagues - Dr. Goldfarb and Elliot Kimber, another early SGML pioneer and office mate. They convinced me to re-base my work from GML to the Standard Generalized Markup Language (SGML); my conversion to the Dark Side was now complete.
The ID Framework and ID Workbench
Due to departures, I was one of the few folks left in the development lab who understood SGML. As a result, I was asked to lead the development of IBM's first SGML content management system, called the Information Development Framework. It was a client-server architecture that would morph into what was called the ID Workbench, based on a proprietary SGML dialect called IBMIDDoc. The original authoring tool was based on Author/Editor from SoftQuad, which was eventually replaced by Arbortext Editor and later Oxygen. The transformation engine was based on OmniMark from Exoterica, which was later acquired by Stilo Inc. For the generation of multilingual PostScript, Xyvision XPP was used, with Adobe Acrobat Distiller generating the PDFs. There wasn't a CMS initially, but later IBM acquired FileNet, which we extended to natively manage DITA source content. For content delivery, the open-source Eclipse Help System (EHS) was extended by our team and known internally as IEHS. IEHS was later replaced by a massive-scale delivery head called IBM Knowledge Center, which persists to this day and supports tens of millions of pages of content with millions of visitors weekly.
Years later, the ID Workbench made the transition to XML. It was the first platform to implement the Darwin Information Typing Architecture (DITA), invented by our team. We extracted the core of the ID Workbench, built the DITA Open Toolkit, and made it freely available on SourceForge - a key element in making DITA a successful open standard that spawned an entire industry.
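For the curious, a DITA topic is a small, typed XML document. A minimal task topic using the public OASIS doctype looks something like this (an illustrative example, not IBM source content):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
<task id="install_client">
  <title>Installing the client</title>
  <taskbody>
    <prereq>Download the installation package.</prereq>
    <steps>
      <step><cmd>Run the installer.</cmd></step>
      <step><cmd>Restart the workstation.</cmd></step>
    </steps>
  </taskbody>
</task>
```

Each topic type (concept, task, reference) enforces its own strict content model, which is what makes large-scale reuse, consistent transformation, and omnichannel delivery practical.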
The technology for building the structured content supply chain wasn't the difficult part. The content corpus consisted of millions of GML source pages strewn across every geography. The GML wasn't pure; it was mixed with SCRIPT/VS processing macros that complicated conversion from GML to SGML. Document migration would be a daunting undertaking and no one wanted to do it, let alone lead the global effort. A strong believer in intelligent structured content and SGML, I stepped up to own and lead the global conversion mission.
It was every bit the challenge anyone feared, and more so. To say it was painful for the Information Development teams would be a significant understatement. We enlisted three structured-document conversion vendors, qualified each, customized the conversion scripts, wrote extensive conversion guides, and managed the conversion site by site.
There were doubts and debate at the leadership level about whether it would work and whether it was worth the impact; the productivity loss during migration and the pain inflicted on the writing teams were intense. I took a risk and laid my career on the line by committing to the leadership team that it would succeed. I traveled the globe with my small team, teaching ID teams how to handle the conversion and write in SGML. It worked, and our team was recognized with a highly coveted Director's award.
DITA - putting the X in XML, and the industry that almost wasn't
DITA almost didn't happen. I was present at the SGML '97 conference in Boston where the XML 1.0 specification was announced with great fanfare on stage by the likes of Jean Paoli, Jon Bosak, and Tim Bray. Up until that point, there had been some work on a lite version of SGML for the web, coined Monastic SGML by its participants. The XML 1.0 spec incorporated many of the objectives of Monastic SGML.
When I returned from the SGML '97 conference I immediately began to evangelize XML throughout IBM. I published an extensive white paper about XML that was widely circulated internally, and I presented it to then-CEO Louis Gerstner in person. I was convinced that the next generation of our publishing platform needed to be based on XML. However, a new SGML DTD, called WebDoc, was already in the queue to replace the book-oriented IBMIDDoc DTD. The senior corporate project manager insisted we build the next-generation platform on WebDoc - I vehemently opposed that plan.
A pitched battle ensued that became quite public internally. In the end, I prevailed and convinced my immediate manager to form a workgroup to develop an XML implementation instead. The workgroup, convened in the late-'98 timeframe, consisted of 10-12 folks who spent an intense year-plus developing DITA. I consider two of my colleagues in particular, Michael Priestley and Don Day, the thought leaders and primary inventors of DITA. I built a new platform to put DITA into production. A new corporate manager joined the team halfway through the workgroup effort, and as a result only a handful of the workgroup members were named on the DITA patent; the rest of us received T-shirts with the new DITA logo as a thank-you. A few years later DITA became the most successful OASIS standard in history. Had I not prevailed and formed the workgroup that created DITA, the entire industry as we know it might not exist.
Twenty more years leading the development of multiple generations of content supply chains
I went on to lead the design, build, and management of multiple generations of structured content management systems and supply chains at IBM. The platforms enabled immense reuse, repurposing, and omnichannel content delivery, with efficiencies amounting to more than half a billion dollars in direct, verifiable savings. At its height, the intelligent content supply chain served 1,500 full-time Information Developers supporting more than 3,500 IBM product offerings and 60 million multilingual pages - the majority generated from a source content corpus that was dramatically smaller.
Along the way I worked with dozens of tools and services providers, both as a collaborator and as a customer. As a product development team lead (PDTL), I saw my role as the number-one advocate for content creators and worked tirelessly to find and build automation and tools that made content creation and management easier for professionals and casual contributors/SMEs alike.
One of the projects I initiated was the design and creation of an all-visual DITA authoring tool called AuthorBridge. None of the dozens of providers with authoring tools were interested in cannibalizing their flagship products to provide a walk-up-and-use DITA editor that was as fluid as authoring in Word, guided the content creator to remain conformant to strict and consistent topic content models - and did it at a fraction of the cost. The intent wasn't to displace power-user authoring tools; it was driven by the need to support growing numbers of casual contributors and SMEs who needed a DITA editor only on occasion. The cost of power-user editors was far too steep to justify for the skyrocketing populations of SME contributors. During a casual discussion with a long-time provider of transformation software and conversion services, Stilo Inc., the topic of this gap in the marketplace came up. Stilo, not having an offering in the editor space, agreed to collaborate with me and build an editor that didn't require any knowledge of, or direct interaction with, DITA or markup whatsoever.
In the mid-2000s I wrote an extensive internal white paper on highly automated, continuous content localization at scale. It eventually led to a highly successful implementation after we transitioned the entire supply chain to a web-services-based model.
Expert systems, AI, cognitive content, and deep learning - an unlikely journey
My first experience with artificial intelligence dates to the early 1990s, when I worked with ADP on what were then called Expert Systems - an early incarnation of rules-based AI. I was so intrigued that I began working directly with a pair of IBM scientists at IBM Watson Research in Hawthorne, NY, who specialized in computational linguistics. The team had developed a text analysis technology called EasyEnglish Analyzer that could identify incorrect grammar and writing style and suggest corrections. Again, it was rules-driven rather than based on machine learning, but I had my content platform team integrate it. Some of the work of that research team made it into the foundation of what would become IBM Watson.
This was at least a decade before computational linguistics tools like Acrolinx would even be developed. My user base included more than a hundred full-time editorial staff who clamored for more editorial automation. I became aware of Acrolinx, from Acrolinx GmbH, in the mid-2000s but was barred from pursuing it due to misinformed biases. In the later 2000s, after a change of leadership, I had the opportunity to fully explore such solutions and went on to implement assistive, rules-based computational-linguistics AI for more than five thousand users across every content domain in the enterprise. I became a certified Acrolinx engineer and collaborated with the company to design and implement major services including findability (SEO) assistive analytics, metrics reporting dashboards, legal trademark auto-identification, terminology extensions, and more. The collaboration helped scale the platform, and nearly all the features made it into the commercial offering. One that sadly didn't was an AI-based extension to the in-editor interactive sidebar that surfaced short descriptions of, and links to, related content residing in other content silos. I presented and demonstrated a fully operational model at the CIDM conference in 2019 to show that AI and cognitive retrieval were indeed practical and achievable.
This was also the age of IBM Watson, which gained global fame on the Jeopardy! game show pitting human against machine. The machine won hands-down. Back at Big Blue, most everyone was required to become knowledgeable about IBM Watson and machine learning; some of us went further and became semi-fluent, although in retrospect there were many mysteries that could have been explained far more simply.
I was more than a little intrigued, having had prior experience with pre-ML AI based on Expert Systems and an even deeper understanding of computational-linguistics AI. I began asking pointed questions about how AI could be applied to user assistance and help content beyond AI-driven bots.
One truth was certain: enabling AI and deep learning for content was much more doable with structured content than with unstructured content. Every Watson expert with whom I consulted confirmed that fact. It was then I knew that everything we had done since the early '80s with structured content based on a document object model (DOM) was forward-thinking and essential to successful AI and deep-learning strategies for content delivery at scale.
In June of 2020, I retired from IBM after 38 storied years. Over more than three decades, our team's visionary work had catapulted Big Blue to the pinnacle of the craft as industry leaders. With a new home recently built in the mountains of Western North Carolina and a back-to-the-office mandate looming after twelve years of working remotely, it was time to declare victory, pass the baton to our successors, and gracefully part ways.
After taking a few months off to finish the house, I accepted a new role as Senior Director of Content Platforms at Avalara, a provider of cloud-based automated business and tax compliance software. Avalara made the strategic decision to implement an intelligent content strategy based on DITA, and I am now building a next-generation intelligent content supply chain with the support of a highly talented, highly motivated, and visionary team.
I now reside with my wife Ellen overlooking the majestic Blue Ridge Mountains near Asheville in Western North Carolina. We have three wonderful and talented adult daughters.