Copyright © 2014–2017,2019 by Thomas E. Dickey
Herewith, I give a brief history relating how, despite a promising start, I stopped producing grist for the academic treadmill and focused on a more productive activity: developing interesting computer programs. (I offer my apologies for the mixed metaphor, in case anyone noticed.)
I was first introduced to the subculture of papers (and presentations) as a graduate student.
At the time, I was developing microprocessor assemblers and simulators. A student of Granino Korn's had published a paper in the October 1975 issue of IEEE Computer magazine which was essentially a simple cross-assembler written in BASIC, with some discussion added to make it a publishable paper. My advisor brought it to my attention, and I typed the program in, finding that it had several errors which would prevent it from running. I also pointed out that, unlike my work, it was a single (monolithic) program with no reusable code.
To my advisor, the fact that the published program did not work was irrelevant; the point was publication. My advisor saw an opportunity at a nearby symposium to publish my work:
A Microcomputer Development and Evaluation Tool, C.P. Neuman and T.E. Dickey,
Trends and Applications 1976: Micro & Mini Systems
May 27, 1976
National Bureau of Standards
At that point, I had written about 27,000 lines of code on my project (which eventually grew to about 85,000). My advisor and I had some differences for this paper:
I sent a corrected abstract to the program chairman advising of the changed name, the reason why, and of course that I would not participate if the change was not accepted.
Around the same time, I endured a "candidacy" for the degree. This is a practice run for the defense, for which I prepared by documenting (in several pages) my proposed research and the methodology used (as well as justification for the various aspects). Not knowing what to expect, I had supposed we would talk about that. We did not. Abrahim Lavi (the senior faculty member of the group) had me go up to the chalkboard and begin working out a design for a feedback control system. This was fairly routine material, for a master's student. However, Lavi kept pouncing on pauses in my replies, apparently looking for a misstep. After 45 minutes of this exercise, he asked a question along the lines of “now what do you expect to happen if you make (this) change?”
I stumbled, starting to say “I think”. That was what Lavi had been waiting for (I can see that the process had a clearly defined goal). He said:
“Stop right there.” (I did, of course).
“A PhD does not think, a PhD knows.”
After allowing time for the lesson to sink in, he concluded with “You may sit down, now.”
The point being made was that recipients of the degree have demonstrated that they are detail-oriented. There is no guarantee that they have demonstrated native intelligence (and I have encountered enough counter-examples to see that any such guarantee would be futile).
Not long after that, I began working full-time at a research center. One of the last things that I did as a full-time graduate student was to design and develop an implementation of a language for describing the I/O and similar interfaces used in my simulations. I called that IDL. Though the name was inspired by “ISP”, the language was lower-level and dealt with data flows in a different manner. (A later use of the name, found in Stone and Nestor's 1987 report, is likely unrelated, but the $33 cost of answering that point deters me from satisfying my curiosity.)
While I still had a lot of work to do (as a part-time graduate student, I worked “only” 30-40 hours per week on my research, along with 40 hours on my day job), the way in which I had allowed for bit-manipulation in IDL seemed good enough for a short paper, describing the design goals and tradeoffs. My employer was agreeable (it would not interfere with my job). I submitted the paper, and received mail confirmation that it had been accepted.
However, when I arrived at the conference, I found that something had gone awry. Though the paper had been accepted, at some point in the process it was lost (and not recorded). Fortunately, I knew one of the presenters (Alice Parker), who explained the issue to the people running the conference. I was allowed to make my short presentation, and got a few questions (such as where one could get the code). This was:
4th Annual Symposium on Computer Architecture,
cosponsors: IEEE Computer Society and ACM,
Silver Spring, Maryland, March 23-25, 1977
Overall, this was a more positive experience than the first.
However, I found little time for external papers for a while.
Internal research reports and memos, along with my graduate research took all of my time.
In the course of my research, there was an unresolved dispute which festered for a few years. My advisor (against my advice) reopened the issue. My adversary took the opportunity to attack my research, saying that programmer variability was “orders of magnitude” beyond the effect which I was measuring. While the political aspect was eventually resolved (largely due to my having at hand suitable documentation to demonstrate that my approach had been approved), I was curious what factual basis this criticism might have.
Reasoning that the clues would be found in published literature, I spent a half-dozen Saturdays early in 1981 skimming through all of the technical and trade journals at the neighboring university's library (my own had a poor collection). I found what I was looking for (published papers in this area were very rare, probably no more than a dozen), and on rereading the original paper, I could see the flaws. Fortunately, the authors had published their raw data. This allowed me to perform a standard ANOVA (analysis of variance) on the data. My computation showed that the criticism had been unfounded.
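For readers unfamiliar with the technique, a one-way ANOVA of this kind is a small computation. The sketch below is a minimal pure-Python version; the numbers are invented for illustration (they are not the raw data from the paper in question), and the result is the F statistic one would compare against an F distribution.

```python
# Minimal one-way ANOVA (analysis of variance).  The data are invented
# for illustration; they are NOT the raw data from the paper discussed.
from statistics import mean

def one_way_anova(groups):
    """Return the F statistic for a list of samples (one list per group)."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares (k - 1 degrees of freedom)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Three hypothetical groups of task-completion times:
times = [[10, 12, 11, 13], [14, 15, 13, 16], [9, 10, 8, 11]]
print(round(one_way_anova(times), 2))  # F = 15.2 for these numbers
```

A large F (relative to the critical value for the given degrees of freedom) indicates that the differences between groups are unlikely to be explained by the variation within groups.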
Further, having found the (apparently only) source of this statement on programmer variability, I looked further to see how it had been cited in the literature. There were several references, which (as time passed) grew into a sizable niche over the next 12 years. When I discussed this example of folklore with Gary Leive, he offered the opinion that it had not been caught simply because it sounded so reasonable.
I saw an opportunity to publish a rebuttal to the most recent of these references, and got approval from my management to do so. The result appeared here:
Programmer variability, Dickey, T.E., Westinghouse Research and Development Center, Pittsburgh, PA
Proceedings of the IEEE (Volume: 69, Issue: 7)
July 1981, Pages: 844-845
Digital Object Identifier: 10.1109/PROC.1981.12087
as well as in my completed research. There was (inevitably) a rebuttal by Curtis in the Proceedings, to which I might have replied. At the time, I had a copy from NTIS of a report on which the 1979 paper with Sheppard in Computer was based, likely one of these:
Both versions summarized the data, but did not present the raw data—which meant that independent analysis of the data as I had done for the Sackman paper was not feasible. Along those lines, I was concerned about Curtis's comment:
Data for 6 other professional programmers involved in this experiment were deleted, since they were unable to debug either the pretest or the experimental programs.
However, I was in the process of moving on to a new company, and had no time or opportunity to do this. By the time I had become settled there, Curtis had also moved—to a nearby division of the same company. Reopening the debate would have been awkward. Rereading the rebuttal in 2014, I see a different aspect: Curtis said:
Sackman's message that substantial performance differences do exist among programmers remains valid. Detecting a 20+:1 range ratio depends upon having one brilliant and one horrid performance in a sample. However the range ratio is not a particularly stable measure of performance variability among programmers. The dispersions of such data as appear in Table I are better represented by such measures as the standard deviation or semiinterquartile range.
This is the answer which “Tony” was lacking: some authors read only the first part of that paragraph, while others read the whole paragraph. Curtis's mention of statistical measures is a reminder that because the extremes are relatively rare, they should receive less weight. Those who focus on the extremes (as in the beginning of this section) give them more weight than they are due.
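Curtis's point about the instability of the range ratio is easy to demonstrate with a toy computation. The sketch below uses invented debugging times (not data from any of the papers discussed): one brilliant and one horrid performance produce a 20:1 range ratio, but dropping those two extremes collapses the ratio to under 2:1, while the semi-interquartile range barely changes. (The standard deviation, as the numbers show, is itself pulled around by the outlier.)

```python
# Invented "debugging time" scores: one very fast, one very slow performer.
# Compare how three dispersion measures react when the extremes are removed.
from statistics import pstdev, quantiles

times = [5, 18, 20, 22, 24, 25, 26, 28, 30, 100]

def summarize(data):
    q1, _, q3 = quantiles(data, n=4)         # quartile cut points
    return {
        "range_ratio": max(data) / min(data),
        "stdev": round(pstdev(data), 1),
        "semi_iqr": round((q3 - q1) / 2, 1),
    }

print(summarize(times))        # with both extremes: ratio is 20:1
print(summarize(times[1:-1]))  # extremes dropped: ratio under 2:1,
                               # but the semi-IQR moves only slightly
```

The range ratio is determined entirely by the two most extreme observations, so it says little about the bulk of the sample; the quartile-based measure describes the middle half of the data and is nearly unaffected.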
I completed my research (another publication, though calling Ann Arbor Microfilms “publication” seems to be stretching the point). I produced the book in two versions using txt and unover:
My book numbers pages within each chapter; a quick check shows 131 pages in the shorter version. That makes the full version about 590 pages. Another aspect of the publication requirement was to provide a copy for the university library. I did not like the idea of producing so large a book for the shelf (it would have been one of the largest, even after I tweaked the line-spacing and margins). Gary Leive said it was unnecessary to include any source code, saying that the computer science students, for instance, “never publish a line of code”. Not liking that idea, I discussed it with the library staff, who agreed that I could provide a book which was printed double-sided. (A few months later, I found that they had reconstructed it as a single-sided book, which could not have improved matters, given the way the margins are laid out; but I was gone by then, and my copies look good.)
I added a quote from Sackman's paper preceding the title page, with a footnote pointing out the irony of the text which I chose. Because the publication date for my book was May 14, 1981, that makes it the first published work to point out the problem with the 28:1 range. The IEEE Proceedings letter was published in July, 1981.
As of March 2014, Google Scholar has two entries referring to this. One is a mangled entry: the date is wrong, my initials appear rather than my full name, the “related articles” are some secondary references which I used, and the title is given incorrectly:
Microprocessor Evaluation for Process Control Tasks
Evaluation of Microprocessors for Process Control Applications
Hearing that I had decided to leave the R&D center, Carlos Baradello suggested I come where he was, at ITT. After looking at a few other possibilities, I did that.
From late 1981 until mid-1983, I was a developer in the ITT 1240 system at the ATC (Advanced Technology Center) in Connecticut, mostly in the network maintenance area. Initially, Carlos was the team lead; he moved to a different position late in 1982. His replacement was Jamshed Mulla. Management seems to have higher turnover than development.
While the end application was written in CHILL (CCITT High Level Language), the compilers were hosted on MVS/TSO (a mainframe). The interactive front-end (running VM/SP CMS) had no development tools.
I wrote some tools (about 250 programs—mostly scripts but also some assembly language). A large fraction were to extend xedit. In the process of doing this, I wrote documentation for about 160 of these programs.
I wrote most of those programs to help with writing software and documenting my work. For instance (as a late-comer to the project), I immediately took on the smallest of the network maintenance applications, and shortly after, inherited the largest of these applications (from José Pozas, a Spaniard returning to his original company). There were only sketchy notes on its design. But there was a documentation requirement for the testing phase, with the scenario of inputs/outputs. I wrote a program which took the sets of test-data, along with logs from running the program, and generated the roughly 250 pages of documentation needed. So far, so good. However, when I proposed applying this to the remaining application, Jamshed objected, saying that I should not be doing Jan Schiets's work.
Jan Schiets, by the way, was from Belgium. According to Schiets, it rained all winter in Belgium and the sun never came out (a factor in my declining the offer from the BTM people a few years later). However, Connecticut had its moments as well (see this and this).
On the other hand, my work with the prtrcvr utility helped us prepare draft documentation on the CMS system which could be sent to the word processing group, preformatted in a way which greatly reduced the amount of work they needed to turn it into “real” documentation.
According to an associate, my programs were used at other sites; however none of those were (to my knowledge) published externally.
I gave three presentations (at different conferences), and have comments on two more conferences.
Early in 1983 (perhaps in the spring), Jamshed said at the group meeting that there was a conference later in the year which was of interest, since one of the people on the program committee was in our department. That was GlobeCom '83. After the meeting, I suggested to Jamshed that if he would write an abstract for a paper describing our system and related development, I would write the paper. (He had been at ITT two years longer than I, and I wanted his input.) After further discussion, I wrote the abstract, but got him to agree to review the paper, to ensure that my presentation properly captured the sense of the jargon which we used.
Around the same time, Jamshed told us that there was to be an internal conference for ITT software development, and that abstracts for that were being accepted. The PTC (Programming Technology Center) was organizing this conference. I submitted abstracts for three (proposed) papers discussing my work:
After some time, Jamshed was able to report that
That done, he gave me back a copy of the abstract for “p”, on which the reviewer's comments had been redacted. I was able to read most of the comments, though not the reviewer's name (just as well, since the comments were not constructive). Setting those aside, I wrote a paper to match my abstract, and sent it in.
The PTC staff, it seems (from having reviewed resumes the following year), were generally my academic peers. However, discussing that topic with Jamshed early in 1983, I pointed out that ITT had no realistic career path for someone with my interests (either they would make a manager out of the person, or push him into a corner, to work on things that no one else understood). Aggravating this was the fact that administrative staff had picked up a (corporate) cultural idiosyncrasy: in referring to someone above a certain level (no matter what credentials the person had acquired), they referred to the person as “doctor so and so” (fill in the name). Below that level, they omitted the title.
As a rule, I do not use academic titles unless there is money involved. That saves a lot of needless formality, and is more accurate.
The conference was in the Sheraton in New York City. It was impractical for me to stay overnight, so I drove each day (as long as two hours each way). There were a few other ATC staff at the conference (I recall Bill Paulsen, for example), perhaps a couple of hundred people altogether.
Papers were presented in side-rooms off the auditorium. The speakers in the auditorium were solely PTC staff, plus an invited speaker (Edward Feigenbaum). In particular, I recall a succession of PTC staff who got up, one after another, each introducing the next (PTC) speaker as a genius, etc., concluding with someone discussing in the vaguest possible terms a proposed system called Soma. He gave no clue regarding the system's goals or design. I asked Bill Paulsen who that person was to warrant that type of introduction. Bill was only able to say that he was the person who had translated Dijkstra's letter.
Presenting my paper was more rewarding, since the audience was fairly responsive. I have no printed program from this, but recall that Bill Curtis headed that session.
Also during June 1983, I transferred to a different department (again with Carlos, but this time reporting to Richard Wexelblat). In that department, there were people, Ruven Brooks for example, building rule-based expert systems (which probably made Feigenbaum of interest to them, as he was to others).
My initial assignment was to provide an assembler for the CAP (Cellular Array Processor) chip which Steven Morton's group was then designing. Someone had evaluated a few of the programs listed in Skordalakis's paper “Meta-assemblers”, but found none which appeared suitable.
Shortly after joining Wexelblat's group (part of a department doing internal R&D), he, Artur Urbanski and some others in his group went to a conference. There was enough money, so I went as well. (I took the train, they flew). This was
SOFTFAIR, a conference on software development tools, techniques, and alternatives
Hyatt Regency Crystal City,
July 25-28, 1983
co-sponsored by IEEE Computer Society, National Bureau of Standards, ACM SIGSOFT.
While it was advertised as presenting several tools, I found the presentations too vague regarding the problems that had been solved in developing the tools (apparently, several had not reached the state of usability). As I told Artur, it seemed that the reason for presenting in SOFTFAIR was to obtain more money for research and development.
PTC's replanned workshop took place in Newport, Rhode Island.
Proceedings of the Workshop on Reusability in Programming
September 7-9, 1983
Length 295 pages
You can probably find references to this, mainly in papers that were republished in other venues. The apparent purpose in splitting it off from the June conference was to provide a suitable publication opportunity for PTC staff.
By the way, PTC staff variously referred to their organization in print as “ITT Programming”, “Programming North America”, and “Programming Technology Center”. I am using what appears to have been their actual title (as usual, corrections citing a reliable source are welcome). The center in Stratford actually comprised more than one center, including a training center, as well as Programming Applied Technology (which I recall being told was not the same as PTC).
In mid-November, Carlos had me come with him to Fairfield University. My understanding at the time was that it was an ITT workshop meeting (most of the attendees were from the PTC in Stratford); however it appears to have been a little more than that (a meeting of an advisory board for the university).
We were well fed (in the faculty dining room), and after lunch I was one of a half-dozen presenters. I gave a technical presentation for the meta-assembler which I had developed.
Although I presented well enough, the audience was not responsive: the lights were dim; there were fewer people than anticipated (a few had left before the presentations began); one of the key members of the committee was not present (I heard someone comment that Bill Curtis had already left ITT). Another problem was the follow-up to the presentation. Carlos had not discussed the context beforehand. When I was done presenting, he elaborated on the possibilities of the technique, implying that there was proposed research to extend it. (I was aware of none.)
Shortly after, Carlos left as well, becoming the deputy director of a lab in Italy.
Later (perhaps early 1984), Ivan Cermak told us at a division meeting that the PTC was being merged into the ATC. One of the reasons given was that the PTC had overrun its budget by 50%.
IEEE Global Telecommunications Conference,
San Diego, California,
My principal reason for writing this paper was to help make ITT 1240 more visible.
At the conference, I found that ESS 5 (the competition) was well represented, essentially dominating the session at which I presented.
Here are a few titles (found by web-searches) to help give a flavor:
My paper was in session 18:
18.2 Aspects of Network Maintenance in a Distributed Processing Environment,
T. E. Dickey and J. Mulla, ITT Advanced Technology Center,
The presenters for ESS 5 were not technical staff; thus we had little to talk about. Also, at the various gatherings in the conference, I did not meet any developers. The attendees were generally managers or marketing staff.
Fortunately, my paper was along the same lines (no discussion of nuts and bolts), so I did not waste my time.
Like the PTC, the R&D-style group at the ATC was trimmed down. I was one of the people chosen to form a new department developing and supporting CAD tools for VLSI. The department was formed before its plans were clearly defined.
Here is a link to a paper written several months after the department was established, outlining its goals.
After the new department's first meeting, I asked what I should be doing, and was told “we need an editor”. But the person that I asked could not explain what kind of editor. Discussing this with Craig Barilla (from the R&D group working on the CAP—Cellular Array Processor—chip), he told me that they had been trying to use a VLSI layout editor (part of a tool called HCAB). Going back to the CAD people, I relayed this information, asking if they would like me to work on HCAB. They agreed.
To expand a little:
I went to work on the integration and graphics issues. There was a junior developer assigned to help me (useful as long as we were deleting code, not so useful when writing new code—he spent half his time playing hack). Enrique Abreu (in Steve Morton's group) made fixes and improvements to the hierarchical aspect.
The resulting development effort kept me busy, but I still kept developing tools, such as an Imagen (printer) previewer, futran (Fortran portability aid), and flist (file-manager).
Moving to the CAD department stopped progress on spasm, and after I advised Craig Barilla that it was unlikely to be fast enough for their needs, they ultimately chose a different product. That was early 1985, around the time that the first CAP samples came back from the fabricator.
Late in 1985, ITT reduced the staff at the ATC by half. Here is the short explanation for why this happened:
BTM sent a team to the ATC to acquire the technical staff from the CAD department. What they offered was shift-work (sharing a workstation around the clock with 2-3 persons per machine), for 18 months to provide a transition period. When that was done, ITT would attempt to place the returned staff in a position in the United States. Not a deal for everyone, but at least one person went. On his return, he went to DEC.
At the end of 1985, I moved on to a different development effort and was too busy to consider presentations, etc. I recall interviewing at six locations. Three were dismissed early on due to geographic preferences. Another fell by the wayside after I asked how they were organized (seeing just an outline with 400 people disqualified them). I interviewed at IDA (where Richard Wexelblat went a couple of years later). The good choice was a company which you have not heard of, taking over development of a large set of programs which supported a TCP/IP network board. In contrast, IDA was looking for people with advanced degrees, who would spend most of their time writing papers. In fact, that appears to be what Wexelblat did. (I do not recall encountering any credible software developers who went there, confirming my initial impression.)
This work was, by the way, not connected to an outside network. My work on ded (a new program) and lincnt (a rewrite) at that time was based on my memory of the preceding programs rather than direct reuse.
During 1986, Thomas Moriarty followed up from a discussion of flist to tell me that the remainder of the ATC (in Stratford) was in the process of being sold to Alcatel. Reportedly (seen here and here) the process was messy.
In 1987, I left that development position because the people in charge (the vice president who hired me and the board's hardware designer) both left. They had come to this company about a year before me, and during that time, it was taken over by another company (with cash). The vice president announced in April that he was planning to leave, having agreed to stay on after the takeover only for that amount of time. The following evening (Friday), the other developer came in around 11:30pm and copied all of the project's files to a tape. I found this out on Monday morning by using ded to review the access times of the files (to remind myself where I had left off on Friday). I advised the manager, who confronted him. (I was told later that he said he was just doing a backup.) Notwithstanding that, he stayed on a few months after the manager left, leaving a few weeks before the hardware engineer.
That left me and a marketing person (in mid-October). Luckily, the Software Productivity Consortium (SPC) was hiring (and for about four years, they did software development).
Before leaving for SPC, I worked with the hardware engineer who took over the project, to ensure that our documentation was up to date, and assisting him in resolving a problem with the board's design.
During the first year that I was with the Software Productivity Consortium (SPC), I designed/developed one of the first set of deliverables to our member companies. This was a prototype application which demonstrated how requirements documents could be automatically linked together, allowing the user to navigate between the different aspects of a requirement (design, testing, etc.).
Brian Nejmeh set out to write a conference paper on the topic. After some time, he asked for my comments. Reading his draft, I could see that the presentation was rather muddled—no clear path through the verbiage. I suggested some changes, giving him a modified copy which deleted parts, moved other parts and filled in statements which were not obvious. He added me as a co-author (I do not recall seeing the final paper).
That was presented (by Nejmeh or Wartik) in San Francisco at the end of August 1989. I was not much interested in going; the previous year Nejmeh said there was money for conferences and I went to the Summer USENIX Technical Conference (June 20-24 1988) also in San Francisco. The only interesting paper was the one on spiff. I spent the rest of my time walking across the city (my time was well spent). Thereafter, if asked about going to a conference, I would reply that I preferred just getting the proceedings.
Nejmeh and Wartik worked on other papers, but this one found only one admirer (Ramesh Balasubramaniam).
Most of Nejmeh's and Wartik's other papers were either written in the same period (late 1980s to early 1990s) or followups from one of those. In contrast, though I still am associated with some programs from that era, most of my work is newer and ongoing.
Because Nejmeh was moving on to some more interesting endeavor, he apparently turned completion of this over to Wartik (who came in around the end of 1989). I worked with Wartik on followup work for the traceability project, as well as setting up an early version of TAE Plus, before I moved to the Dynamics Assessment Toolset (DAT) project. In between (Nejmeh and DAT), Wartik attempted to get the four people reporting to him to develop a further traceability prototype. However, two were non-productive (one viewed SPC as an extended vacation, while the other had managed to acquire a PhD in computer science without learning to program). The former eventually melted away, back to his member company. The other left because of a conflict of interest (publishing work he had done for SPC with his own copyright notice, citing his one-man consulting company as owner). Getting the productive member of the team to produce was assigned as my responsibility by Wartik (the team lead), who did none of the development.
While there were a few cases of inappropriate copying (I sent the security officer after one, who was copying all of the files from my workstation as he was preparing to leave for another company), SPC staff were for the most part pretty well behaved. I could have written (or co-written) a paper or two based on the work that I was doing. However, I did not seriously consider this. SPC produced reports. Around the middle of my tenure there, they got interested in check-lists (and associated work-flow). It took 18 signatures to get a document out the door (not bad with fewer than 200 people). So I put that thought out of my mind, reasoning that any paper that I might contemplate would require just as much overhead.
Instead, I wrote user manuals for the programs I developed at SPC. I found that about a third of the problems surfaced while documenting. That is, in writing a description, I would realize that I ought to check it, and would find errors of omission or of functionality.
After joining DAT, predictably, I wrote another large application on that project (an X Window application which presented results from DAT in a variety of formats). Judging by this report, Wartik continued working in the requirements area.
I worked on the second version of DAT. Not much was published from that, for the reason that the project failed to meet its scheduling objectives. A few of the team members decided, about a third of the way through the schedule, that they had a better idea of what to implement. They bypassed the team leader, getting upper management to agree to the change in direction. However, they did not obtain a waiver for the scheduling change, and the resulting program was two to three months late (on a schedule constrained to be less than a year). Seeing this coming, I dug in my heels and resisted the temptation to make the delay worse. I had the impression that of the ten developers on the project, I was the only one to deliver what I had agreed to, on the original schedule. The project's team lead left.
The “better idea” was undoubtedly done for all of the best reasons, but it gave the management a bad experience. It was the last large (multi-person) development effort done at SPC that I was aware of.
Still, it was almost two years before I left (at the end of winter, 1994). I worked on prototypes whose scope was less ambitious (and I did all of the development work on those). Helping to get me moving was my last review (noting that my manager and I were good friends, even after this). He wrote it out as three points (quoting from memory):
Three-four months later, SPC reduced staff by about ⅛ (taking care to first fire someone who was not working the hours that he charged).
Fortunately, I had a friend from SPC—the person who had been the team lead for DAT. He knew someone who needed my skills.
In addition to my real work, I was developing ded and other programs during this time period (1985-1994). At this point, it is perhaps a good idea to review how I started this:
Coming into the next company, I advised them up front that I had programs which should be excluded from my employee rights assignment. They agreed, and the resulting piece of paper said that I am “sole proprietor” of the programs which I distribute on my website. In discussion, the points agreed upon were that I made no money from these (which was the big issue for potential conflict of interest), and that I did the development using my own resources. Working through the implications of that, I took additional care to avoid conflict of interest by not mentioning who pays me. This was (and is) a distinction from people who made newsgroup postings with the disclaimer that their posting did not express the views of big corporation XYZ.
I had “bought my own soapbox” (and the entry costs have declined enough to completely obviate the need for disclaimers of the type just mentioned), and to a large degree these activities are independent of whoever my employer happens to be during the intervening years. This is not the same as Heinlein's story about the man who bought his own cannon. Rather, it has provided me with a way to develop, and to publish the results of, work whose duration and visibility would not be feasible in the paid work which I do.
Incidentally, none of the companies for which I used to work is in business any longer (though some trademarks were transferred to other companies). Even pictures of the large complexes are lacking.
Besides my published artifacts (programs, documentation and mailing archives), some of my work is recognized by others in the community. Here is a list, ordered by date, of books whose authors mention me in their acknowledgements; that is, books to which I contributed in some way.
In the Acknowledgements section (page xvi), it says
Thanks to Keith Bostic, Steve Kirkendall, Bram Moolenaar, Paul Fox, Tom Dickey and Kevin Buettner, who reviewed the book. Steve Kirkendall, Bram Moolenaar, Paul Fox, Tom Dickey and Kevin Buettner also provided important parts of Chapters 8 through 12
and Chapter 12 (page 230) notes
Currently vile maintenance is done “by committee,” with Tom Dickey being the primary maintainer. Paul manages the mailing lists.
and on page 235:
The screenshots and the explanation in this section were supplied by Kevin Buettner, Tom Dickey and Paul Fox. We thank them.
That is a lot of acknowledgement. As I recall, Kevin Buettner took the lead on supplying screenshots. I supplied the bulk of the description of the majormode feature.
Kurt Wall began his acknowledgements with this paragraph:
Now I get to blame all of the people responsible for this book. Thanks to Nick Wells, who got me involved in this through a seemingly innocuous post to a Caldera users' mailing list. I appreciate the folks on comp.os.linux.*, comp.lang.c, and comp.unix.programmer who answered an endless stream of questions. Thomas Dickey graciously reviewed the chapter on ncurses, catching silly errors.
In the acknowledgements, page xvii, Wall says
Thanks to all the net.citizens and the members of the Linux community who patiently and graciously answered my questions and offered suggestions. In particular, Thomas Dickey, current maintainer of the ncurses package, reviewed an early version of the ncurses chapters and corrected several boneheaded mistakes and lots of typos.
On the Credits page vii, I am the technical editor.
Dan Gookin also wrote this for the Acknowledgements, page xv:
I'd like to thank Thomas Dickey for his marvelous work augmenting my dive into the NCurses library. I truly appreciate his participation in this project and admire him not only for maintaining NCurses but working to assist others with their questions and problems. Thank you Thomas!
My participation on this project was by invitation: Dan Gookin suggested it to his acquisitions editor, who contacted me, and then Dan followed up. I checked the entire book for errors (found some minor ones), and tested each of his sample programs (about 220). The review began in October 2006, finishing in early November. The development editor sent me drafts, which I marked up with corrections.
Wiley paid me for this (they assumed that I would be paid, and I did not argue). For the hours spent, this amounted to about half what I am paid by my employer, so I do not view that as a conflict of interest.
In the Acknowledgements for the Seventh Edition (page xx), it says
Thanks to Keith Bostic and Steve Kirkendall for providing input on revising their editors' chapters. Tom Dickey provided significant input for revising the chapter on vile and the table of set options in Appendix B. Bram Moolenaar (the author of Vim) reviewed the book this time around as well. Robert P.J. Day, Matt Frye, Judith Myserson and Steven Figgens provided important review comments throughout the book.
The nature of my “significant input” is this: I started with the material from the Sixth Edition, Arnold Robbins's outline for the book (basically a page-limit), and sent him a draft for the chapter and appendix which covered all of the interesting new/improved features. According to my change history, I spent exactly a month working on this (from 2007-10-05 to 2007-11-05). That counted some fussing to get the DocBook 4.5 schema to work for me. Apparently Arnold Robbins liked it (a copy of the draft may be found here).
While I am looking at the acknowledgements in Chapter 18, I see that I overlooked updating the screenshot credits (I redid all of those for the Seventh Edition). Any other errors or omissions in the chapter and appendix are probably my fault.
There is an acknowledgement on page xxiii (one of 42 readers cited) in the third edition, originating from a short email conversation I had with Rago in April 2005 regarding my criticism of Teer's book. He added an item on this page which reads
Thanks to the following people who reported typos or problems with APUE2e, or otherwise helped in some way. 1. Thomas Dickey pointed out that UNIX Curses Explained is out of print.
The conversation took place just after the release of the second edition. Based on my comments, Rago revised the relevant paragraphs to read
A description of terminfo and the curses library is provided by Goodheart, but this book is currently out of print. Strang describes the Berkeley version of the curses library. Strang, Mui, and O'Reilly provide a description of termcap and terminfo.
The ncurses library, a free version that is compatible with the SVR4 curses interface, can be found at http://invisible-island.net/ncurses/ncurses.html. It can also be found at http://www.gnu.org/software/ncurses.
The timing on this (relative to the second edition) was a coincidence. In this interview, it says that Rago worked on APUE2e for three years. From his page, I get more context. I have paper copies of the first two editions (see the book's website).
“Citations” differ from acknowledgements: I did not actively contribute to the works where the citations appear. Their respective authors provide the citation to show that they were using my work.
Here are a few which relate to my role as the developer/maintainer of various programs:
In the references, citation
 Steven De Toni and Thomas Dickey. C++ Beautifier V1.9, January 1997
refers to bcpp's use on page 7, as a formatting stage after C_UNPARSE.
This paper is a survey of metrics tools; c_count is mentioned, due to previous attention on Chris Lott's page.
The book introduces a ten-page section on dialog, saying
The dialog package is a nifty little tool originally created by Savio Lam and currently maintained by Thomas E. Dickey.
This mentions diffstat a half-dozen times, citing my website in the reference section.
This credits me for ncurses, e.g.,
The most important examples include the library for interfacing with the O'caml internal representation of values, some of which was described above, and a library for interfacing with the ncurses[Dic] C library for building command line user interface in which a small text editor has been built.
The FAQ points out that I maintain xterm. That was established long ago, e.g., Kuhn's comment in 1999, as well as Packard's comment for a FreeDesktop bug report in 2004. Likewise, Branden Robinson's comment in LWN.net in 2005 agrees with that (adding ncurses and dialog).
For xterm and ncurses in particular, documentation refers to me more often by my role (maintainer) than by name, e.g., in Jesse Thrysoee's page for xtermcontrol, or in Dagobert Michelsen's comments regarding recommended configuration for ncurses in the OpenCSW (Solaris) port.
This citation is not exactly a compliment:
Schulz identifies the developer/maintainers of elvis, nvi and vile, stating that only vim is actively developed.
(It is a reasonably safe assumption that the compatibility numbers were simply made up—another example of folklore).
Still, it is a citation, of sorts.
Those are citations of my work. Papers are a different matter.
The (two-page, counting the table) letter regarding Sackman and the 28:1 ratio has gotten the most citations. Some of the more interesting comments are shown via Microsoft Academic. Google Scholar found the majority of the list shown here (which I have ordered by date, and filled in details). The list is provided to demonstrate (to myself, at any rate) that the letter has become a lasting part of the relevant technical literature.
Most of the words which I have published are in the various FAQ (frequently asked questions) documents which I maintain:
In each case, my motivation was the same: to reduce the repetition involved in explaining the causes of common problems and their respective solutions.
The ncurses and xterm FAQs are widely cited. Some of the more interesting ones:
Besides being cited, the xterm FAQ is included in the Debian xterm package.
Web searches and reading the bug-reporting systems find me involved (and credited for my work). I will not try to duplicate that here.
There are always potential biographical summaries. When Richard Stallman sent me mail in October 1999, advising me that I was “official” ncurses maintainer, he requested among other things that I write an entry for http://www.gnu.org/people/people.html. I did not, commenting to an associate at the time that there was no point in doing so, because there was no check for content added there (aside from appropriateness). I do supply the content for the GNU page for ncurses.
The same comment would be true of most biographical summaries found in published works: the information is supplied by the author, and fact-checking seems to be perfunctory judging by the occasional scandals in that area.
That said, I have been told about my actual or proposed inclusion in these standard collections. I have not verified any of them. It seems that the publisher sends mail, saying that they are going to include you in some (specific) edition, and that they want your biographical information. And, by the way, you can order a copy of the book (I recall prices around $100). Actually, I think that paying $100 to have my name in a book is a poor deal.
Herewith, the list (which I do not add to my resume):
There were 2-3 later requests for biographical data, probably while I was at the Software Productivity Consortium. Thinking back, it has been a while since I saw one of those: my visible activities since 1994 have been separate from employment, making me less interesting to those publishers.
Testimonials are less interesting than acknowledgements.
People do send me email making comments which could be used for this purpose (and I thank them). However, I do not publish these, having observed how testimonials are used (and abused).
For instance, in the mid-1990s I noticed testimonials added to the webpage for a well-known program. I recognized one of the names quoted there (someone who had made suggestions for some of the programs I develop). However, I recalled that this person was a regular contributor to that same program. I researched the names of the others providing testimonials, and found that half of the testimonials had been written by the developers themselves.
Enough said, perhaps.