How the GPS system works and why April 6, 2019 is so dang important.

I got a text message from AT&T the other day (I still run a Samsung Galaxy S2) noting that I might lose GPS accuracy/availability on that phone due to the 3G GPS rollover changes. Since it sits on WiFi at the house all day every day, I'm not too worried about it.
 
I'm not falling for that. SixPackinCharlie and some cheap glasses with a bad British accent... this isn't up to his usual standards. I like the bad Morgan Freeman voiceover guy better.
 
Bah humbug.

https://insidegnss.com/schriever-air-force-base-announces-next-gps-week-number-rollover/

Schriever is home to US GPS. Just east of KCOS. Tiny little R area, I go by it all the time.

A bit of trivia... why 1023 weeks? Because it's 2^10 - 1 (2^10 is 1024, and computer people start counting at 0), so the week is a 10-bit value in the software. Why 10? I dunno. We software geeks prefer powers of 2, not just multiples of 2. But the original hardware may not have allowed 16 bits.
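For anyone curious how that plays out in practice, here's a toy sketch in Python. It's not what any real receiver runs; the "resolve against a reference week baked into the firmware" trick and all the names here are my own assumption of one common approach.

# Toy illustration of the 10-bit GPS week number (WN). The nav message
# only carries WN mod 1024, so a receiver has to guess which 1024-week
# block ("epoch") it is actually in.

GPS_WEEK_MODULUS = 1024  # 2**10 possible values: 0..1023

def broadcast_week(true_week):
    # What actually goes over the air: just the low 10 bits.
    return true_week % GPS_WEEK_MODULUS

def resolve_week(wn, reference_week):
    # Assumed heuristic: the current date can't be earlier than some
    # reference week known to the firmware (e.g. from its build date).
    base = (reference_week // GPS_WEEK_MODULUS) * GPS_WEEK_MODULUS
    week = base + wn
    if week < reference_week:
        week += GPS_WEEK_MODULUS  # we must have rolled over since the reference
    return week

print(broadcast_week(2048))   # week 2048 (the April 2019 rollover) -> 0
print(resolve_week(0, 1900))  # a receiver that "knows" it's at least week 1900 -> 2048

Receivers with no such reference, or a stale one, are the ones that jump their date back 1024 weeks (roughly 19.6 years).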
 
A bit of trivia... why 1023 weeks? Because it's 2^10 - 1 (2^10 is 1024, and computer people start counting at 0), so the week is a 10-bit value in the software. Why 10? I dunno. We software geeks prefer powers of 2, not just multiples of 2. But the original hardware may not have allowed 16 bits.
They probably used the other 6 bits to store something else important. Back when the system was developed, it was common to try to save bytes where you could.
 
And Y2K was supposed to bring the end of the digital world too. :rolleyes:
The reason that Y2K wasn't a disaster was that we fixed the systems that needed to be fixed. I did a couple of large projects for the DoD; they came down to the wire.
 
I heard some in-panel aviation units are affected; anyone know if GNS-530Ws (like mine) are?
 
You’re telling me that my generation may have to start using old paper maps?!? F that!!

:D:D
 
The reason that Y2K wasn't a disaster was that we fixed the systems that needed to be fixed. I did a couple of large projects for the DoD; they came down to the wire.
I work in the industry and the ones that didn't get "fixed" still didn't have any issues. There was no problem. Sure, there was probably an outlier here or there, but it was not what it was made out to be. I didn't mind it, because we were selling material like hotcakes; everybody was concerned there were going to be issues all across the board.
 
The reason that Y2K wasn't a disaster was that we fixed the systems that needed to be fixed. I did a couple of large projects for the DoD; they came down to the wire.
More important, many systems were designed and built to avoid issues like Y2K.
 
Let me also point out that the US GPS satellites are not launched and left. Updates are done constantly. And I mean all the time....
 
I work in the industry and the ones that didn't get "fixed" still didn't have any issues. There was no problem.

Well, then you got very, very lucky in terms of the systems you worked with/on. I worked a bunch of projects that were tested and failed, which is why we were remediating them. Hell, entire systems were scrapped and replaced (with the associated millions of dollars in expenses) because they were tested, failed, and the source code for some or all of the system had been lost, so no repairs were possible.

To pretend that Y2K was "no problem" is simply insane.
 
To pretend that Y2K was "no problem" is simply insane.
As is pretending the whole thing was not vastly overblown. The panicky horror stories about elevators and airplanes crashing to the ground, all that nonsense. I worked as a network engineer for a major brokerage at the time, and nearly every engineer and sys admin had to sit there all freaking night despite our repeated assurances to management that NONE of our systems would have the slightest issue. Hell, most of our network equipment wasn't even talking to NTP servers back then; it just knew the time since boot. We'd tested the crap out of everything, well in advance. No matter, we had to be fully staffed all night "just in case". THAT was insanity.
 
I had a minor Y2K problem with my CAD software: one piece of it changed the year from 99 to 100, and the other couldn't accept a 3-digit date code, so it refused to load the file. A minute with a text editor to fix the few affected files as they were opened over the next couple of months and all was well.
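That "99 to 100" behavior is the classic symptom of writing out a years-since-1900 counter with no padding or truncation. A guess at the shape of it, sketched in Python (the function names and the strict two-digit reader are just my reconstruction from the description above):

def write_year(year):
    # What the buggy writer apparently did: emit years-since-1900 as-is,
    # so 1999 -> "99" but 2000 -> "100".
    return str(year - 1900)

def read_year(field):
    # What the other piece apparently required: exactly two digits.
    if len(field) != 2 or not field.isdigit():
        raise ValueError("bad date code: %r" % field)
    return 1900 + int(field)

print(read_year(write_year(1999)))      # 1999 -- loads fine
try:
    print(read_year(write_year(2000)))  # "100" -- three digits
except ValueError as err:
    print("refused to load:", err)      # the failure described above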
 
No matter, we had to be fully staffed all night "just in case". THAT was insanity.
So you had to work a late night, nothing bad happened, and you were paid to do so? ....does not sound insane...sounds like doing your job...I'm sure management expected nothing to happen as well, but that doesn't mean you don't control for the risks.

Very common. I pay engineers to be available during periods of increased risk, simply because that gives us increased response time. If we've done our jobs right, they won't have to do any work.

Security guards spend all of their time sitting around "just in case".
We teach emergency procedures to pilots "just in case".
I guess that's insane too? Or perhaps...a guy was being a little hard on his management?
 
So you had to work a late night, nothing bad happened, and you were paid to do so? ....does not sound insane...sounds like doing your job...I'm sure management expected nothing to happen as well, but that doesn't mean you don't control for the risks.

Very common. I pay engineers to be available during periods of increased risk, simply because that gives us increased response time. If we've done our jobs right, they won't have to do any work.

Security guards spend all of their time sitting around "just in case".
We teach emergency procedures to pilots "just in case".
I guess that's insane too? Or perhaps...a guy was being a little hard on his management?
We had probably 40-50 salaried people working unpaid OT to sit around on our collective asses all night. Every one of us had been involved in testing and evaluating all of our systems and applications to make certain they either had no date/time dependencies at all or were all using 4-digit year dates. We could easily have had one or two people from each team on call to handle any issues or emergencies. Nope... had to have a building full of people to watch exactly what we had told them would not happen, not happen.

As I said, completely overblown.
 
As is pretending the whole thing was not vastly overblown.
I remember New Year's Eve 1999 and all the media hype surrounding it like it was yesterday. The city of Phoenix put on a huge block party downtown and had hired a bunch of big-name bands and performers. What was supposed to be an estimated crowd of 200,000+ people turned out to be only about 1,000 people, if that. I remember my wife and I riding our Harley through what little crowd there was, checking out all the bands and vendor booths. We'd ride over to Waylon Jennings and park and listen to him for a while, then ride over to the Goo Goo Dolls stage and listen to them for a while, then we'd ride over to Alice Cooper and listen to him for a while. It was like an apocalypse in that we basically had the whole place to ourselves. We rang in the New Year, watched the fireworks, and then rode home and went to bed. I don't believe they've had another New Year's Eve block party since then.
 
I work in the industry and the ones that didn't get "fixed" still didn't have any issues. There was no problem. Sure, there was probably an outlier here or there, but it was not what it was made out to be. I didn't mind it, because we were selling material like hotcakes; everybody was concerned there were going to be issues all across the board.
Then the ones that didn't get fixed weren't critical. We tested our systems with Y2K dates and got the wrong answers [what materiel was received, disposed of, or moved post-1/1/2000? The reports came up blank]; we didn't fix any system that "rolled over" correctly (and there were a lot of two-digit systems written by folks who knew they'd never need, say, pre-1980 dates, so a 03 was known to be 2003). There were "work-arounds" for many systems, but, hey, there was a "work-around" for MCAS.
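That "03 is known to be 2003" convention is usually called date windowing, and it's about all a two-digit field can give you. A minimal sketch of the idea in Python; the 1980 pivot comes from the example above, not from any standard:

PIVOT = 80  # assumption: this system never needed pre-1980 dates

def expand_two_digit_year(yy, pivot=PIVOT):
    # Window a two-digit year: values at or above the pivot are 19xx,
    # everything below it is 20xx.
    return (1900 + yy) if yy >= pivot else (2000 + yy)

print(expand_two_digit_year(99))  # 1999
print(expand_two_digit_year(3))   # 2003
print(expand_two_digit_year(80))  # 1980

It holds up only as long as the data never spans more than 100 years; past that you're back to guessing.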
 
The last GPS rollover was in 1999 and, AFAIK, nothing much happened.
 
The reason that Y2K wasn't a disaster was that we fixed the systems that needed to be fixed. I did a couple of large projects for the DoD; they came down to the wire.
I have fixed a Y2K bug in a Win BIOS date routine, but the real problem was that the original routine was FUBAR. That was one of the few bugs I got a bonus for fixing.
 
vastly overblown - major brokerage

Well, brokerages are risk averse at the best of times. You can't make money if you are down, and you can lose a lot of money being down. And if some consumer or industrial sector got it wrong, the market was absolutely going to have a vastly overblown reaction, which would be the absolute worst time for some failure to have slipped through the cracks.
 
The reason that Y2K wasn't a disaster was that we fixed the systems that needed to be fixed. I did a couple of large projects for the DoD; they came down to the wire.

Me too. But the truly scary part of Y2K was the laziness and short-sightedness of programmers that knew damned well how to do it right the first time, yet didn’t.

That part of our genes is not improved today. We don’t learn from our mistakes.
 
Me too. But the truly scary part of Y2K was the laziness and short-sightedness of programmers that knew damned well how to do it right the first time, yet didn’t.

That part of our genes is not improved today. We don’t learn from our mistakes.

Not to say that there are no lazy programmers, but that's not fair to most of the builders of those systems. Many of those systems were built in the mainframe days, when systems were only rented from IBM and all the core (yes, core) as well as disk cost real money as you used it. Those systems programmers never expected the longevity that occurred and were saving real, run-time money by limiting those fields.

Now that I have gray hair I’ve dug through enough old systems (some that I built) and realized most engineers do what makes sense given the conditions, goals and assumptions in place when they built the system.
 
As is pretending the whole thing was not vastly overblown. The panicky horror stories about elevators and airplanes crashing to the ground, all that nonsense. I worked as a network engineer for a major brokerage at the time, and nearly every engineer and sys admin had to sit there all freaking night despite our repeated assurances to management that NONE of our systems would have the slightest issue. Hell, most of our network equipment wasn't even talking to NTP servers back then; it just knew the time since boot. We'd tested the crap out of everything, well in advance. No matter, we had to be fully staffed all night "just in case". THAT was insanity.

I didn't for a second think the world would end or planes would fall from the sky. On the other hand, a lot of software was affected, and there really was no excuse for such short-sighted code. None at all.
 
Not to say that there are no lazy programmers, but that's not fair to most of the builders of those systems. Many of those systems were built in the mainframe days, when systems were only rented from IBM and all the core (yes, core) as well as disk cost real money as you used it. Those systems programmers never expected the longevity that occurred and were saving real, run-time money by limiting those fields.

Now that I have gray hair I’ve dug through enough old systems (some that I built) and realized most engineers do what makes sense given the conditions, goals and assumptions in place when they built the system.

Many were not. I worked on Unix systems where programmers, even near the cutoff, were still using just two-digit years.
 
I used my GNS 530 for an LPV approach today...no issues. Garmin had said their units were not going to be affected.
 