Motivation of software professionals

  • Thread starter Stefan Kiryazov

Arved Sandstrom

Martin said:
On Sun, 14 Feb 2010 14:14:26 +0000, Arved Sandstrom wrote:
[ SNIP ]
That would never happen in the British civil service: the higher
management grades would feel threatened by such an arrangement. They'd
use sabotage and play politics until the idea was scrapped. I bet the
same would happen in the US Govt too.

I don't exactly see it happening at the federal or provincial or
municipal levels in Canada either. Not any time soon. Which is ironic,
because this would simplify the lives of PMs and higher-level managers.

AHS
 

Seebs

Really. I've not seen any free software which adopted all of
the best practices.

Bespoke software may. But go to a store that sells discs in boxes,
and tell me with a straight face that any of those boxes contain software
developed through a development operation which adopted all of the best
practices.
In my experience, some of the best
practices require physical presence, with all of the developers
having offices in the same building. (The experiments I've seen
replacing this with email and chat haven't turned out all that
well.) This is far more difficult for a free project to achieve
than for a commercial one.

True. That said, I've been on distributed teams pretty much exclusively
for the last couple of decades, and I think they certainly *can* work. It
depends a lot on building a culture that's suited to it, and probably also
on having a few aspies involved. (It is nearly always the case that I
am less efficient in face-to-face meetings than I am in pure-text chat.)
First, free software doesn't have the highest quality. When
quality is really, really important (in critical systems), you
won't see any free software.

I'm not totally sure of this. Not *much* free software, probably.
It'd be interesting to see, though. Last I heard, there's no open
source pacemakers, but there are pacemaker programmers which are
built on open source platforms, because that's enough less critical.
(This is third-hand, so no citations.)
ClearCase uses a different model than any of the other version
management tools I've used. In particular, the model is
designed for large projects in a well run shop---if your
organization isn't up to par, or if your projects are basically
small (just a couple of people, say up to five), ClearCase is
overkill, and probably not appropriate. If you're managing a
project with five or six teams of four or five people each, each
one working on different (but dependent) parts of the project,
and you're managing things correctly, the ClearCase model beats
the others hands down.

Hmm. I'm not familiar with its model, but we've used git in that
environment and loved it. That said, we can't see the model for
the interface, which is pretty weak.
Did you actually try using any free software back in the early
1990's?

I did.

NetBSD was for the most part reliable and bulletproof during that time;
it ran rings around several commercial Unixes. I had no interest in g++;
so far as I could tell, at that time, "a C++ compiler" was intrinsically
unusable. But gcc was stable enough to build systems that worked reliably,
and the BSD kernel and userspace were pretty livable.
Neither Linux nor g++ was even usable, and with emacs (by
far the highest quality free software) it was touch and go,
depending on the version. Back then, the free software community
was very much a lot of hackers, doing whatever they felt like,
with no control. Whereas all of the successful free software
projects today have some sort of central management, ensuring
certain minimum standards.

I have no idea what you're talking about. I cannot point to any point
in the history of my exposure to free software (which predates the 1990s)
at which any major project had no central management. Linux was pretty
flaky early on, but then, in the early 1990s, all it had to do was be
more stable than Windows 3.1, which was not a high bar to reach for.

-s
 

Seebs

That I don't believe. I've seen a lot of particularly good
developers in industry as well. People who care about their
code---in fact, one of the most important things in creating a
good process is to get people to care about their code.

I don't see how that argues against free software being written, in
some or many cases, by "particularly good developers, who care about
their code."
In a well run development process, such feedback is guaranteed,
not just "expected". That's what code reviews are for.

True. But of commercial places I've worked, only a few have been anywhere
near as active in code reviews as the free software projects I've worked with.
I have not yet seen a free software project which wouldn't reject code purely
on the basis that it had poor style or didn't match coding standards. I have
seen several commercial projects which didn't. So while good commercial
places may be that good, I've seen nothing to suggest that free software
places aren't usually that good.
I'm far from sure about the "often", and I have serious doubts
about "hundreds"---you don't want hundreds of cooks spoiling the
broth---but that's more or less the case for the best run
freeware projects. Which is no different from the best run
commercial organizations, with the difference that the
commercial organization has more power to enforce the rules it
sets.

And more incentive to cheat around the edges. This is the essential
flaw of corporations.
ClearCase is by far the best version management system for
large, well run projects. It's a bit overkill for smaller
things, and it causes no end of problems if the project isn't
correctly managed (but what doesn't), but for any project over
about five or six people, I'd rather use ClearCase than anything
else.

Fascinating. Again, not what I've heard -- but to be fair, I've had
no call to use it.
That is, of course, a weakness of free software.

I don't think it's a weakness in free software; I think it's a weakness in
a liability model. The ability to modify code when it doesn't quite suit
is a huge advantage. If we were dependent on reporting bugs to vendors and
waiting for their fixes for everything we use, it would be a disaster.
We put up with it, for one hunk of code, because the cost of maintaining local
expertise would be prohibitive. Apart from that, no.

-s
 

Seebs

To be really effective, design and code review requires a
physical meeting. Depending on the organization of the project,
such physical meetings are more or less difficult.
Nonsense.

Code review is *not* just some other programmer happening to
read your code by chance, and making some random comments on
it. Code review involves discussion. Discussion works best
face to face.

IMHO, this is not generally true. Of course, I'm autistic, so I'd naturally
think that.

But I've been watching a lot of code reviews (our review process has named
reviewers, but also has reviews floating about on a list in case anyone
else sees something of interest, which occasionally catches stuff). And what
I've seen is that a whole lot of review depends on being able to spend an
hour or two studying something, or possibly longer, and write detailed
analysis -- and that kind of thing is HEAVILY discouraged for most people
by a face-to-face meeting, because they can't handle dead air.

Certainly, discussion is essential to an effective review. But discussion
without the benefit of the ability to spend substantial time structuring and
organizing your thoughts will feel more effective but actually be less
effective, because you're substituting primate instincts for reasoned
analysis.

I really don't think that one can be beaten. If what you need for a code
review is for someone to spend hours (or possibly days) studying some code
and writing up comments, then trying to do it in a face-to-face meeting would
be crippling. Once you've got the comments, you could probably do them
face-to-face, but again, that denies you the time to think over what you've
been told, check it carefully, and so on. You want a medium where words sit
there untouched by the vagaries of memory so you can go back over them.

But!

You do need people who are willing and able to have real discussions via text
media. That's a learned skill, and not everyone's learned it.

It is not universally true that discussion "works best face to face".
Certainly, there are kinds of discussions which benefit heavily from
face-to-face exposure. There are other kinds which are harmed greatly by
it. Perhaps most importantly, many of the kinds which are harmed greatly
by it *FEEL* much better face-to-face, even though they're actually working
less well.

The curse of being made out of meat is that your body and brain lie to
you. Knowing about this is the first step to overcoming the harmful side
effects.

-s
 

Martin Gregorie

I have no idea what you're talking about. I cannot point to any point
in the history of my exposure to free software (which predates the
1990s) at which any major project had no central management. Linux was
pretty flaky early on, but then, in the early 1990s, all it had to do
was be more stable than Windows 3.1, which was not a high bar to reach
for.
About the best free software I remember from that era was Kermit. It
worked and worked well and had ports to a large range of OSen and
hardware of widely varying sizes: I first used it on a 48 KB 6809 running
Flex-09 and still use it under Linux. It had an open development model
though it was managed within a university department, so the project
owners had pretty good control over it.
 

Nick Keighley

I know of systems that still poke data down 9600 baud lines.
The "standard" life of a railway locomotive is thirty or fourty
years.  Some of the Paris suburbain trainsets go back to the
early 1970's, or earlier, and they're still running.


Have you been to a bank lately, and seen what the clerk uses to
ask about your account?  In more than a few, what you'll see on
his PC is a 3270 emulator.  Again, a technology which goes back
to the late 1960's/early 1970's.

travel agencies seem to run some pretty old stuff

It depends on what you're writing, but planned obsolescence
isn't the rule everywhere.

I believe the UK's National Grid (the high voltage country-wide power
distribution system) wanted one-for-one replacements for very old
electronic components. What had been a rat's nest of TTL (or maybe
something older) was replaced with a board containing only a few more
modern components (maybe one). But the new board had to have the same
form factor, electrical power requirements, etc. This was because they
didn't want to actually replace the computers they were part of.

I know of software that runs on an emulated VAX.

Sometimes software far outlives its hardware.
 

Mr. Arnold

Andy Champ wrote:

<snipped>

What are you talking about? What is your point with this post, other
than some kind of rant?
 

Brian

[I really shouldn't have said "most" in the above. "Some"
would be more appropriate, because there are a lot of
techniques which can be applied to free development.]
I'm not sure what you are referring to, but one thing we
agree is important to software quality is code reviewing.
That can be done in a small company and I'm sometimes
given feedback on code in newsgroups and email.

To be really effective, design and code review requires a
physical meeting. Depending on the organization of the project,
such physical meetings are more or less difficult.

Code review is *not* just some other programmer happening to
read your code by chance, and making some random comments on
it. Code review involves discussion. Discussion works best
face to face. (I've often wondered if you couldn't get similar
results using teleconferencing and emacs's make-frame-on-display
function, so that people at the remote site can edit with you.
But I've never seen it even tried. And I note that where I
work, we develop at two main sites, one in the US, and one in
London, we make extensive use of teleconferencing, and the
company still spends a fortune sending people from one site to
the other, because even teleconferencing isn't as good as face
to face.)


It hadn't really dawned on me that my approach might be
thought of like that. The rabbis teach that G-d controls
everything; there's no such thing as chance or coincidence.
The Bible says, "And we know that all things work together
for good to them that love G-d, to them who are the called
according to His purpose." Romans 8:28. I get a lot of
intelligent and useful discussion on gamedev.net, here and
on Boost. It's up to me though to sift through it and
decide how to use the feedback. I've incorporated at
least three suggestions mentioned on gamedev and quite a
few more from here. The latest gamedev suggestion was to
use variable-length integers in message headers -- say for
message lengths. I rejected that though as a redundant
step since I'm using bzip for compression of data. I
thought for a while that was the end of that, but then
remembered that there's a piece of data that wasn't
compressed -- the length of the compressed data that is
sent just ahead of the compressed data. So now, when
someone uses compression, the length of the compressed
data is generally also compressed with the following:
(I say generally because it depends on the length of
data.)


uint8_t
CalculateIntMarshallingSize(uint32_t val)
{
    if (val < 128) {                // 2**7
        return 1;
    } else if (val < 16384) {       // 2**14
        return 2;
    } else if (val < 2097152) {     // 2**21
        return 3;
    } else if (val < 268435456) {   // 2**28
        return 4;
    } else {
        return 5;
    }
}


// Encodes integer into variable-length format.
void
encode(uint32_t N, unsigned char* addr)
{
    while (true) {
        uint8_t abyte = N & 127;   // low seven bits
        N >>= 7;
        if (0 == N) {
            *addr = abyte;         // final byte: high bit clear
            break;
        }
        abyte |= 128;              // continuation bit
        *addr = abyte;
        ++addr;
        N -= 1;                    // offset: avoids redundant multi-byte encodings
    }
}


void
Flush()
{
    // Reserve room at the front of compressedBuf_ for the worst-case
    // encoded length, then compress into the space after it.
    uint8_t maxBytes =
        CalculateIntMarshallingSize(compressedBufsize_);
    uint32_t writabledstlen = compressedBufsize_ - maxBytes;
    int bzrc = BZ2_bzBuffToBuffCompress(
        reinterpret_cast<char*>(compressedBuf_ + maxBytes),
        &writabledstlen,
        reinterpret_cast<char*>(buf_), index_,
        7, 0, 0);
    if (BZ_OK != bzrc) {
        throw failure("Buffer::Flush -- bzBuffToBuffCompress failed ");
    }

    // Encode the compressed length immediately ahead of the compressed
    // data and send both with a single write.
    uint8_t actualBytes = CalculateIntMarshallingSize(writabledstlen);
    encode(writabledstlen, compressedBuf_ + (maxBytes - actualBytes));
    PersistentWrite(sock_, compressedBuf_ + (maxBytes - actualBytes),
                    actualBytes + writabledstlen);
    index_ = 0;
}


Those functions are from this file --
http://webEbenezer.net/misc/SendCompressedBuffer.hh.
compressedBuf_ is an unsigned char*. I've thought that the
calculation of maxBytes should be moved to the constructor,
but I have to update/improve the Resize() code first.
We've discussed the Receive function previously. I now have
a SendBuffer class and a SendCompressedBuffer class. This is
the SendCompressedBuffer version of Receive --

void
Receive(void const* data, uint32_t dlen)
{
    unsigned char const* d2 =
        reinterpret_cast<unsigned char const*>(data);
    while (dlen > bufsize_ - index_) {
        memcpy(buf_ + index_, d2, bufsize_ - index_);
        d2 += bufsize_ - index_;
        dlen -= bufsize_ - index_;
        index_ = bufsize_;
        Flush();
    }

    memcpy(buf_ + index_, d2, dlen);
    index_ += dlen;
}
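
The posted file doesn't show the receiving side of that length prefix,
but the matching decoder is easy to infer from encode(). Here is a
minimal sketch (hypothetical; decode() is my name for it, not something
from SendCompressedBuffer.hh):

// Decodes an integer written by encode() above.
// Hypothetical sketch, not part of the posted file.
uint32_t
decode(unsigned char const* addr)
{
    uint32_t N = *addr & 127;
    uint32_t shift = 7;
    while (*addr & 128) {          // continuation bit set: more bytes follow
        ++addr;
        // The "+ 1" undoes the "N -= 1" offset applied by encode().
        N += (static_cast<uint32_t>(*addr & 127) + 1) << shift;
        shift += 7;
    }
    return N;
}

A receiver would presumably decode the length this way, read that many
bytes, and hand them to BZ2_bzBuffToBuffDecompress.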


I doubt it. Making something free doesn't change your
development process. (On the other hand, if it increases the
number of users, and thus your user feedback, it may help. But
I don't think any quality problems with VC++ can be attributed
to a lack of users.)

I think it changes the development process. If it doesn't
then they probably haven't thought much about the implications
of making it free. They are in a battle of perception. Many
people have thought that Microsoft is a greedy company that
makes mediocre products. Giving away some software, while
going against their nature, is done, I think, to help improve
their image. They are forced into what 25 years ago would
have been unthinkable. I don't really think it will radically
improve their product either, though. As I've indicated I
don't think they are coming to the decision because they've
had a change of heart. It's more of a necessity being
imposed upon them. However, as I often say -- better late
than never.



Brian Wood
http://webEbenezer.net
(651) 251-938
 

James Kanze

Bespoke software may. But go to a store that sells discs in
boxes, and tell me with a straight face that any of those
boxes contain software developed through a development
operation which adopted all of the best practices.

I've already stated that most commercial organizations aren't
doing a very good job either. There's a big difference between
what is feasible, and what is actually done.

[...]
I'm not totally sure of this.

I am. If only because such projects require a larger degree of
accountability than free software can offer. I can't see anyone
providing free software with contractual penalties for downtime;
most of the software I worked on in the 1990's had such
penalties.
 

James Kanze

James said:
Did you actually try using any free software back in the early
1990's [sic]?
Seebs said:
Same here.
That's pure fantasy.
I used a couple of Linux distributions in the early nineties,
and they worked better than commercial UNIX variants.

And I tried to use them, and they just didn't stop crashing.
Even today, Linux is only gradually approaching the level of the
Unixes back then.
I used emacs and knew many who used vi back then. They were
solid.

I used vi back then. It didn't have many features, but it was
solid. It was also a commercial product. Emacs depended on the
version. Some worked, some didn't.
I used gcc by the mid-90s and it was rock solid, too.

G++ was a joke. A real joke until the mid-1990's. It was usual
to find more bugs in the compiler than in freshly written code.
I used free software even as far back as the late 80s that
worked beautifully.
The facts to back up your assertions are not in evidence.

They are for anyone who is open and honest about it. I did
compiler evaluations back then, so I know pretty well what I'm
talking about. We measured the differences.
 

James Kanze

Nonsense.

The more channels you have available, the better communication
works.
IMHO, this is not generally true. Of course, I'm autistic, so
I'd naturally think that.

There are probably some special exceptions, but other people's
expressions and gestures are a vital part of communications.

Not to mention the informal communications which occur when you
meet at the coffee pot. I've worked from home, and in the end,
I was frustrated by it because I was missing so much of the
informal communications which make things go.
But I've been watching a lot of code reviews (our review
process has named reviewers, but also has reviews floating
about on a list in case anyone else sees something of
interest, which occasionally catches stuff). And what I've
seen is that a whole lot of review depends on being able to
spend an hour or two studying something, or possibly longer,
and write detailed analysis -- and that kind of thing is
HEAVILY discouraged for most people by a face-to-face meeting,
because they can't handle dead air.

That sort of thing is essential for any review. You do it
before the face-to-face meeting. But the reviewer isn't God,
either; the purpose of the meeting is to discuss the issues, not
to say that the coder did it wrong.
Certainly, discussion is essential to an effective review.
But discussion without the benefit of the ability to spend
substantial time structuring and organizing your thoughts will
feel more effective but actually be less effective, because
you're substituting primate instincts for reasoned analysis.
I really don't think that one can be beaten. If what you need
for a code review is for someone to spend hours (or possibly
days) studying some code and writing up comments, then trying
to do it in a face-to-face meeting would be crippling. Once
you've got the comments, you could probably do them
face-to-face, but again, that denies you the time to think
over what you've been told, check it carefully, and so on.
You want a medium where words sit there untouched by the
vagaries of memory so you can go back over them.

You do need people who are willing and able to have real
discussions via text media. That's a learned skill, and not
everyone's learned it.
It is not universally true that discussion "works best face to
face".

Almost universally. Ask any psychologist. We communicate
through many different channels.
 

Seebs

I am. If only because such projects require a larger degree of
accountability than free software can offer. I can't see anyone
providing free software with contractual penalties for downtime;
most of the software I worked on in the 1990's had such
penalties.

I think it may be done occasionally. Certainly, if I had contractual
penalties for downtime, and my choices were Windows or Linux, I'd
run free software. :p

-s
 

Seebs

And I tried to use them, and they just didn't stop crashing.
Even today, Linux is only gradually approaching the level of the
Unixes back then.

I guess it depends on which unixes, and which Linux. When I went from
SVR4 Unix to NetBSD, though, I had a LOT less downtime.
I used vi back then. It didn't have many features, but it was
solid. It was also a commercial product. Emacs depended on the
version. Some worked, some didn't.

The version I used (nvi) was nearly-rock-solid. Which is to say, I
found and reported a bug and it was fixed within a day. And I've been
using the same version of nvi that I was using in 1994 ever since, and
I have not encountered a single bug in >15 years.
G++ was a joke. A real joke until the mid-1990's. It was usual
to find more bugs in the compiler than in freshly written code.

I said gcc, not g++. And while, certainly, it has bugs, so has every
other compiler I've used. I had less trouble with gcc than with sun
cc. I used a commercial SVR4 which switched to gcc because it was
noticeably more reliable than the SVR4 cc.
They are for anyone who is open and honest about it. I did
compiler evaluations back then, so I know pretty well what I'm
talking about. We measured the differences.

I do not think it is likely that implying that anyone who disagrees
with you is being dishonest will lead to productive discussion. My
experiences with free software were apparently different from yours --
or perhaps my experiences with commercial software were different.

Whatever the cause, the net result is that by the mid-90s, I had a strong
preference for free tools and operating systems, because they had
consistently been more reliable for me.

-s
 

Seebs

The more channels you have available, the better communication
works.

Not so. Some channels can swamp others. If you're busy picking up
facial expressions, instead of properly processing the raw data, the
extra channel has HARMED your quality of communication.
There are probably some special exceptions, but other people's
expressions and gestures are a vital part of communications.

They may well be -- but my experience has been that you can communicate
some things much better without them.
Not to mention the informal communications which occur when you
meet at the coffee pot. I've worked from home, and in the end,
I was frustrated by it because I was missing so much of the
informal communications which make things go.

I would miss that, except that in my workplace (which spans several
continents), the "coffee pot" is IRC.
That sort of thing is essential for any review. You do it
before the face-to-face meeting. But the reviewer isn't God,
either; the purpose of the meeting is to discuss the issues, not
to say that the coder did it wrong.

If you do it well enough, I don't think the face-to-face meeting does
anything but cater to superstition.
Almost universally. Ask any psychologist. We communicate
through many different channels.

I do, in fact, have a psych degree. And what I can tell you is that, while
there are many channels, sometimes you get better or more reliable
communication by *suppressing* the non-analytic channels. Say, if you
were trying to obtain accurate data about a thing subject to pure analysis,
rather than trying to develop a feel for someone else's emotional state.

The goal is not to have the largest possible total number of bits
communicated, no matter what those bits are or what they communicate about;
it's to communicate a narrowly-defined specific class of things, and for
that plain text can have advantages.

Most people I know have had the experience of discovering that a particular
communication worked much better in writing than it did in speech. Real-time
mechanisms can be a very bad choice for some communications.

You get more data per second if you are watching ten televisions than if
you're watching only one. That doesn't mean that, if you want to learn a
lot, the best way to do it is to watch multiple televisions at once. For
that matter, while a picture may be worth a thousand words, sometimes it's
only worth the exact thousand words it would take to describe the picture.
Why would we read code when we could watch a movie of someone reading it,
complete with facial expressions, tone, and gestures?

Because facial expressions, tone, and gestures swamp our capacity to
process input, and leave us feeling like we've really connected but with
a very high probability of having completely missed something because
we were too busy being connected to think carefully. It's like the way
that people *feel* more productive when they multitask, but they actually
get less done and don't do it as well.

-s
 

Alf P. Steinbach

* Seebs:
[snippety]
Why would we read code when we could watch a movie of someone reading it,
complete with facial expressions, tone, and gestures?

You might notice the person picking his/her nose all the time, and goodbye.

Or, you might notice that hey, that's [insert name of Really Attractive Lady
here], and that's who I'll be working with via the net? Hah. Better invite
her to a working lunch, or something.

Otherwise, if learning about the code is your interest, skip the video.


Cheers & hth.,

- Alf
 

Nick Keighley

I've got mixed opinions on this. The real review takes place offline.
Explanation and discussion of possible solutions (I know, a code
walkthrough isn't supposed to consider solutions -- a daft idea if you ask
me [1]) at a meeting.

Design meetings can work.

[1] my review comments would then read "I know a much better way to do
this! Can you guess what it is {he! he!}?"


sometimes he *is* wrong! Some things get discussed/argued to death.
There is nothing more tedious than going through every written
comment on a long list. I'd rather skip along. "OK, you accept items
1-9 but you don't understand what 10 is about -- let's discuss item 10."

If you do it well enough, I don't think the face-to-face meeting does
anything but cater to superstition.

I find it easier to communicate with someone if I've met him in layer
1 at least once. I like a mental picture of who I'm talking to. Um
except here...


Most people I know have had the experience of discovering that a particular
communication worked much better in writing than it did in speech.  Real-time
mechanisms can be a very bad choice for some communications.

try dictating hex patches down a phone line. Or unix commands. You
rapidly discover Italian vowels aren't the same as ours. "ee? do you e
or i?"

Imagine if all programming specifications had to be delivered in a
speech. Or chanted in iambic pentameter (no, spinoza, that's not a
challenge)
 

Arved Sandstrom

James said:
James said:
Did you actually try using any free software back in the early
1990's [sic]?
Seebs said:
Same here.
That's pure fantasy.
I used a couple of Linux distributions in the early nineties,
and they worked better than commercial UNIX variants.

And I tried to use them, and they just didn't stop crashing.
Even today, Linux is only gradually approaching the level of the
Unixes back then.
[ SNIP ]

I have to agree with you here. My earliest use of Linux was 1993, side
by side with IRIX and SunOS. I don't remember frequent crashing of Linux
but there was no question but that the UNIX systems were more stable,
more polished and had more capability. Granted, everyone back then was
throwing Linux on old PCs, which probably didn't help, but still...

AHS
 

Arved Sandstrom

Seebs said:
Not so. Some channels can swamp others. If you're busy picking up
facial expressions, instead of properly processing the raw data, the
extra channel has HARMED your quality of communication.


They may well be -- but my experience has been that you can communicate
some things much better without them.


I would miss that, except that in my workplace (which spans several
continents), the "coffee pot" is IRC.


If you do it well enough, I don't think the face-to-face meeting does
anything but cater to superstition.


I do, in fact, have a psych degree. And what I can tell you is that, while
there are many channels, sometimes you get better or more reliable
communication by *suppressing* the non-analytic channels. Say, if you
were trying to obtain accurate data about a thing subject to pure analysis,
rather than trying to develop a feel for someone else's emotional state.

The goal is not to have the largest possible total number of bits
communicated, no matter what those bits are or what they communicate about;
it's to communicate a narrowly-defined specific class of things, and for
that plain text can have advantages.

Most people I know have had the experience of discovering that a particular
communication worked much better in writing than it did in speech. Real-time
mechanisms can be a very bad choice for some communications.
[ SNIP ]

There is absolutely no question but that some things - many things -
work better in written form than in speech. Requirements specifications,
design documents, test plans and code itself are good examples.

As for code reviews I believe those can go either way. It depends on
skill levels overall, skill level differences, personalities, and
problems (or lack thereof) with prerequisite artifacts like design and
requirements. A code review that involves dysfunctional prerequisites,
dubious skill levels amongst the coders, and lots of ego - sort of a
common situation actually - is probably best handled f2f. IMHO.

But I've certainly seen code reviews that were handled nicely with no
personal interaction other than email or chat. This usually happened
when good requirements and design informed the whole discussion, all the
programmers were skilled, and the egos weren't too large.

A lot of times in a real code review you most definitely are managing
emotional state. That requires developing a feel for it, which you can't
do over chat. Seeing those blank expressions, or looks of anger, is
quite helpful in steering the review towards a somewhat productive
conclusion.

AHS
 

Seebs

A lot of times in a real code review you most definitely are managing
emotional state. That requires developing a feel for it, which you can't
do over chat. Seeing those blank expressions, or looks of anger, is
quite helpful in steering the review towards a somewhat productive
conclusion.

You make a good point here. With some experience, you can learn to preempt
a lot of that by attention to wording. At $dayjob, we have a couple of
techniques applying to this:

1. Code review is done on a list everyone sees. (We're still small enough
to get away with that, for now.)
2. Everyone's code is reviewed, even the "senior" people.
3. Over time, anyone watching the list will see enough people, some of them
quite senior, caught in mistakes or oversights, that they'll develop a
good feel for how to handle that.

It works surprisingly well. When you've seen a couple of the senior staff
say "whoops, yeah, I totally missed that, nevermind, I'll submit a V2 in a
day or two", it becomes much less worrisome to be asked to fix something.

-s
 

Lew

Branimir said:
I admire you ;)

I used to walk secretaries through swapping out or otherwise repairing
motherboard components over the phone back in the early 90s as part of my
tech-support role. Our customers also often had very weird software issues
that we'd help with.

They would only call my employer after having at least one other "expert" make
the problem worse.

The key was to make them take a break of at least an hour before I would help
them, usually out of their office at a park or some other nice place.

People aren't usually stupid, and if they're highly motivated to solve a
problem you can get them through almost anything if you are kind, empathetic
and very, very patient. There was a science to it also - my techniques were
not random. Starting with assuming competence on the part of the customer.
There's a difference between ignorance and stupidity; the former is curable.

I compare it to being the guy in the airport control tower who talks young
Timmy through how to land the plane after the pilot has a heart attack.
 
