For programmers...

That's not all of them. The discriminant can be negative.

These are doubles, not complex values.

And those are the real roots. You can only have pure imaginary roots if B is zero and A and C have the same sign.

A=1
B=1
C=1

Still pure imaginary roots. Anytime B^2 < 4AC, you have imaginary roots.
 
No. The real part is -1/2 for that case. They are complex, not imaginary.

Complex and imaginary are two names for the same concept.

sqrt(-1) is an imaginary number, represented in complex form as "i". Sqrt(-4) is 2i.

Any time b^2 < 4ac, the roots are imaginary.
 
Nope. 2i (or 2j if you're an EE) is imaginary. 2 is real. 2+2i is complex.

With your definition, all real numbers are imaginary.

They are indeed complex, but imaginary requires a factor of i.

Sometimes we have use for pure imaginary numbers, such as in describing phase response of single capacitors or simple I and/or D controllers.

I think what you're trying to say is that the solutions are conjugate pairs or both real. They are not often pure imaginary, but it's possible.

And sqrt(-4) is +/-2i. That there are two solutions is important, or you've made a bug.
 
Imaginary numbers and real numbers are subsets of complex numbers.
 
I feel better.
Just realized I am less nerdy than some of you.
 
I was told to always say "complex" because if you start talking in public about imaginary numbers people start to wonder about mathematicians.
 
I once witnessed two skinny-armed geeks get into a fistfight over which CPU cooler was better. That's nerdy.

-Rich

:rofl::rofl:

I watched two developers get in a no kidding bar fight after a debate about the benefits of strongly typing objects spun out of control.

It was the dumbest thing to watch these guys go from programming talk to a fist fight. They were quite sauced at the time.
 
I got stuck in the middle of a "Enterprise vs Imperial Death Star" argument between two guys one time. They gave up because they couldn't come to any agreement. Then they went on to 'transporters'. That's when I asked - "If a transporter takes matter, disassembles it, moves it, then reassembles it, why couldn't you keep a backup of yourself?"

That kept them going for a while, before they finally said, "It just doesn't work that way."

I guess I'm the one that doesn't get it...

At least they weren't fighting about the advantages of Forth. (One of those two guys would have, I'm sure.)
 
:rofl::rofl:

I watched two developers get in a no kidding bar fight after a debate about the benefits of strongly typing objects spun out of control.

It was the dumbest thing to watch these guys go from programming talk to a fist fight. They were quite sauced at the time.

Let me guess...

a) One of them develops in Python
b) "Duck typing" was mentioned
 
Or Fortran. Yes, people still use it. The only reason I can think of is that they haven't been bitten by a variable accidentally set to zero by mistyping its name.

Haven't been around FORTRAN in a long time have you?
 
Or Fortran. Yes, people still use it. The only reason I can think of is that they haven't been bitten by a variable accidentally set to zero by mistyping its name.

To say that FORTRAN is not a structured language...

OK, my last exposure is FORTRAN77, and there if you didn't put something in the right column it would blow the hell up.
 
To say that FORTRAN is not a structured language...

OK, my last exposure is FORTRAN77, and there if you didn't put something in the right column it would blow the hell up.

Haven't been around FORTRAN in a long time have you?

Is there an echo in here? Lots of legacy FORTRAN out there.
 
OK, my last exposure is FORTRAN77, and there if you didn't put something in the right column it would blow the hell up.

Fortran 90 is kinda like C with global variables, call-by-reference, and weak, implicit typing. And object orientation no one uses.

Fortran 77 was still meant to be written on punch cards. That's what the columns were for.
 
Haven't been around FORTRAN in a long time have you?

I haven't chosen to develop new code in it in some time -- 2000, to be precise -- but it is still in use in this room, today. And there is a lot of legacy code around from 1975. Like the Kuiper flight planner.

Yes, I know about IMPLICIT NONE. And about the OO no one uses. And when you're modifying legacy code, you don't usually have the option of changing its implicitness.

I'm also aware it has become quite a lot more C++ like. What the point of making a second C++ is, is beyond me. It gets defeated very quickly when you have to use legacy constructs.
 
Or Perl...

Perl is fantastic for not needing to do anything even related with standards!

We used to call it the "permanently eclectic rubbish lister."

I abandoned most of my work in that language when the transition from PERL4 to PERL5 broke nearly everything I had, about a year before I defended my dissertation. I will never trust Larry Wall again.
 
Hey, I admitted that I was talking FORTRAN77!

:) Using formatted reads was even worse. Helping undergrads debug their input data files for commercial numerical computation programs was pretty much a nightmare. Of course that experience gave me an edge in industry when I could debug or modify the output of the first few generations of pre-processors...and harvest specific output data.

Now the pre- and post-processors and computation engines have moved on and I can't even begin to read the data files. I'm reduced to looking at pretty pictures on the screen. :-( Good thing I'm in management I guess.
 
If you feel the need to comment the code - then the code sucks and you need to rewrite it.
 
If you feel the need to comment the code - then the code sucks and you need to rewrite it.

Meh.

I have some code in VB that checks about 6 different things and then dumps to the appropriate tables. It's much easier to find my comment line where I call out which table the following lines dump to vs looking for a table name in my SQL statement.
 
Meh.

I have some code in VB that checks about 6 different things and then dumps to the appropriate tables. It's much easier to find my comment line where I call out which table the following lines dump to vs looking for a table name in my SQL statement.

Not a problem if:

- classes and methods are named properly
- source code files are positioned in a way that makes sense
- project is well tested with unit and integration tests that accurately describe what the test does
- you look at code all day for a living

In my world, where every single line of code is peer reviewed before it's ever used and strict standards are adhered to, it's considered a sign of slop to have to comment something. If I can't immediately tell what I'm looking at when I review something, then the code is never used because it's not maintainable.
 
Yup, you did something stupid and lived. Therefore, no one else dies from stupidity, even with actual detonations. Gotcha.


The coders in this thread and the real world do stupider stuff with your bank accounts, I promise.
 
Not a problem if:

- classes and methods are named properly
- source code files are positioned in a way that makes sense
- project is well tested with unit and integration tests that accurately describe what the test does
- you look at code all day for a living

In my world, where every single line of code is peer reviewed before it's ever used and strict standards are adhered to, it's considered a sign of slop to have to comment something. If I can't immediately tell what I'm looking at when I review something, then the code is never used because it's not maintainable.

It only works that way when you code applications that require little or no domain knowledge.

Image processing has been given as an example. It's a very valid example. Anything intensive in numerics is extremely error prone if you depend on what you describe.

For example, a subcontractor wrote telescope pointing code using the exact methodology you describe. It couldn't point to the right part of the sky. I took over the algorithm development, requiring a much more substantial analysis phase, along with proper code documentation. Now it works.

It's not enough to recognize that a vector algorithm is being used. You also need to understand the details of the coordinate systems, sense of rotations, corrections applied, interpolation algorithms (they are NEVER trivial on the sphere and if you try to do what appears natural, you will be wrong).

There is much more to some of these algorithms than can be specified in variable names and style guides. Unit tests are critical -- and an appropriate unit test whitepaper was how I got the **** telescope into the right part of the sky -- but they are not an appropriate place to describe algorithms being used. Those are input -> output, not how you get there. Unless you're willing to write many thousands of them. That's cost prohibitive even in government.
 
It only works that way when you code applications that require little or no domain knowledge.

Image processing has been given as an example. It's a very valid example. Anything intensive in numerics is extremely error prone if you depend on what you describe.

For example, a subcontractor wrote telescope pointing code using the exact methodology you describe. It couldn't point to the right part of the sky. I took over the algorithm development, requiring a much more substantial analysis phase, along with proper code documentation. Now it works.

It's not enough to recognize that a vector algorithm is being used. You also need to understand the details of the coordinate systems, sense of rotations, corrections applied, interpolation algorithms (they are NEVER trivial on the sphere and if you try to do what appears natural, you will be wrong).

There is much more to some of these algorithms than can be specified in variable names and style guides. Unit tests are critical, but they are not an appropriate place to describe algorithms being used. Those are input -> output, not how you get there.
Proper tests of all of those methods accomplish just what you're saying. The tests describe the code and then actually check to make sure it does what you say it does.

Just because you comment something does not mean that the code actually does what the comment says. A few years later and it most likely doesn't at all. Good tests are comments that actually verify themselves to make sure they remain true.

A spaghetti-cluster **** of code that doesn't do what it implies it does is not what I'm advocating for.
 
Proper tests of all of those methods accomplish just what you're saying. The tests describe the code and then actually check to make sure it does what you say it does.

Just because you comment something does not mean that the code actually does what the comment says. A few years later and it most likely doesn't at all. Good tests are comments that actually verify themselves to make sure they remain true.

A spaghetti-cluster **** of code that doesn't do what it implies it does is not what I'm advocating for.

Well, we've got standards that say the comments DO describe what the code does. And that gets reviewed.

There is a huge difference between a standard numerical algorithm and spaghetti. Have you ever worked with actual spaghetti? Take a look at the original SPICE or POISSON (successive over-relaxation) from the 70s. No one writes like that anymore.
 
Testing gets lip service in meetings until you bring up that it'll take a week to test the payroll code that's broken and due to run tomorrow.

I think it would take more than two hands to count the number of times I've seen that happen at multiple employers. Money trumps all.

In the meeting the Devs all nodded and said it was fully regression tested already even though you know it was written five minutes ago.

If you ask "with what data?", knowing there's no non-Production data-set and the copy to Dev takes two days, they'll just say it's time to move on to the "hit the beach" implementation plan for tomorrow night.

Senior sysads know where all the bodies are buried. And which Devs' code to trust in those scenarios, and which ones will be getting patched for two weeks, complete with daily emergency meetings. The emergency being caused by breaking process will be summarily ignored therein.

My favorite was when an old boss exclaimed, in front of the number two guy in the company, that the parallel-port-to-Ethernet converter buried inside the case of the company's new "flagship" product wasn't engineered, it was a "****ing kludge". He was not thanked for signing off on it anyway because the ship date was tomorrow, but he was asked to go talk to HR about his language. LOL.

The project to integrate a proper Ethernet chipset onto a custom board started the next day. All of the kludges ran poorly in the field for over a year until they could all be replaced. For free to the customers, at great expense to the Support Manager, who had called it a kludge and been forced to sign the Change Control Board document allowing it out the door, and to his department's staff.
 
Image processing has been given as an example. It's a very valid example. Anything intensive in numerics is extremely error prone if you depend on what you describe.

This. Simultaneous solutions to linear equations gets just a little bit complex. Now make the equations non-linear and it gets more difficult. Comments help. It was really nice when we got long variable names...there are some matrix functions available in FORTRAN 95 that aren't efficient but do make for some pretty code. Maybe folks could write un-commented code with them. I never really played with them much.
 
This. Simultaneous solutions to linear equations gets just a little bit complex. Now make the equations non-linear and it gets more difficult. Comments help. It was really nice when we got long variable names...there are some matrix functions available in FORTRAN 95 that aren't efficient but do make for some pretty code. Maybe folks could write un-commented code with them. I never really played with them much.

Maybe an example might help.

x1 = x0*cos(theta) - y0*sin(theta);
y1 = x0*sin(theta) + y0*cos(theta);

Anyone who has done ANY computational geometry knows exactly what that is. Question: Is that a forward rotation or an inverse rotation?

Good luck.
 
Not a problem if:

- classes and methods are named properly
- source code files are positioned in a way that makes sense
- project is well tested with unit and integration tests that accurately describe what the test does
- you look at code all day for a living

In my world, where every single line of code is peer reviewed before it's ever used and strict standards are adhered to, it's considered a sign of slop to have to comment something. If I can't immediately tell what I'm looking at when I review something, then the code is never used because it's not maintainable.

It's easier for me to scroll through and look for the green comment highlights than look for table names in black and white.

I use pretty much the exact same code and just change the names of the fields or tables. It's easy enough to see what's going on, but when I need to tweak something I have to know whether it's going to be sending to the invoices table vs the backorder, blanketorder, demo, stockbackorder, or consignment table. Especially when the code is 90% the same for each section.
 
Maybe an example might help.

x1 = x0*cos(theta) - y0*sin(theta);
y1 = x0*sin(theta) + y0*cos(theta);

Anyone who has done ANY computational geometry knows exactly what that is. Question: Is that a forward rotation or an inverse rotation?

Good luck.

Oh god. Flashbacks to working on vision systems in the 90s.
 
It's easier for me to scroll through and look for the green comment highlights than look for table names in black and white.

I use pretty much the exact same code and just change the names of the fields or tables. It's easy enough to see what's going on, but when I need to tweak something I have to know whether it's going to be sending to the invoices table vs the backorder, blanketorder, demo, stockbackorder, or consignment table. Especially when the code is 90% the same for each section.

If the code is 90% the same for each section then that's your problem. Time to refactor. Strong tests make refactoring easy.
 
Well, we've got standards that say the comments DO describe what the code does. And that gets reviewed.

There is a huge difference between a standard numerical algorithm and spaghetti. Have you ever worked with actual spaghetti? Take a look at the original SPICE or POISSON (successive over-relaxation) from the 70s. No one writes like that anymore.

The comments are the tests, the tests were written before the code, and the code ensures the tests get satisfied.

Anyone who says plain comments or documentation reflect what the code actually does either is lying or doesn't write much code.
 