POA Live Chat Issues

jssaylor2007

Ok SURELY I am not the only person who experiences this.

Sometimes when I click Live Chat it loads for about 15 seconds then says authentication failed. Normally this is no big deal since I just click again and it's fine, but sometimes it'll do it for 10 minutes straight, and I just give up.
 

Hmm. I have never seen that. Maybe the techies can figure it out.
 
Try refreshing the page you're on that has the LiveChat link before you click it. The chat link is only good until midnight. So if you had a page loaded before midnight, then clicked Live Chat after midnight, it would never work, period. If you refreshed the page first and then clicked Live Chat, it would work.
 

Really? Is the chatroom down after midnight now?
 
The chat link is only good until midnight. So if you had a page loaded before midnight, then clicked Live Chat after midnight, it would never work, period. If you refreshed the page first and then clicked Live Chat, it would work.

What that means to me, in my ignorant state of all things web board, is that chat "times out" at midnight and can be reinstated from a page that was refreshed AFTER midnight. Of course I could be all wet, but what I DO know is that chat DOES work after midnight.
 
Oh. What did he say?... I'm confused.
It's how the chat and forum integrate.

Take a look at the URL for the Live Chat link. You'll see the following:
Code:
http://chat.pilotsofamerica.com:8080/?0,2,0,0,0&nn=jesse&pu=http%3A%2F%2Fwww.pilotsofamerica.com%2Fforum%2Fmember.php%3Fu%3D590&au=http%3A%2F%2Fwww.pilotsofamerica.com%2Fforum%2Fcustomavatars%2Favatar590_27.gif&hmac=6e1310840cb46rae4108c7062d38a212&cu=
Notice a few things in there:
nn=jesse
pu=link to my profile page
au=link to my avatar
hmac=hash

So you can see that in the link used to log into chat, the username is passed directly, along with the profile URL and avatar URL. Now you might wonder: what would stop you from just changing that username value to whatever you'd like and being an admin in chat? That hash will stop you.

That hash is created by hashing a string that is something like string=username+date+secret-code

The chat takes the incoming username, the date, and the secret code (which it knows as well) and hashes them. It then compares that hash to the hash in the URL. If the hashes match, it knows that URL is valid and was generated by the forum software. If you were to give someone your "Live Chat" link they'd be able to log in as you to the chat, but only for a day, which is why it has the date in it.

This allows the chat to really not be integrated with the forum's database at all. It has no idea that vBulletin is integrated with it. All it knows is that it takes a username, profile, and avatar from the URL and that's you. It uses the hash of that plus the secret code to stop people from messing with it.

The vBulletin code knows the secret code so it's able to properly generate that hash which it shoves into the link.
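In rough terms, both sides boil down to something like the PHP below. This is only a sketch: the md5 choice, the exact string that gets hashed, and the function names are my guesses for illustration, not the actual PoA or RealChat code.

Code:
<?php
// --- Forum side (sketch): build the Live Chat link ---
// $secret is the shared secret that only the forum and the chat server know.
function build_chat_link($username, $profile_url, $avatar_url, $secret) {
    // Hash of username + today's date + secret (md5 assumed for illustration).
    $hmac = md5($username . date('Y-m-d') . $secret);
    return 'http://chat.pilotsofamerica.com:8080/?0,2,0,0,0'
         . '&nn='   . urlencode($username)
         . '&pu='   . urlencode($profile_url)
         . '&au='   . urlencode($avatar_url)
         . '&hmac=' . $hmac;
}

// --- Chat side (sketch): verify the incoming link ---
function chat_link_is_valid($username, $hmac_from_url, $secret) {
    // Recompute the hash with today's date. If it matches, the link was
    // generated by the forum today. After midnight the date changes, so
    // yesterday's link no longer verifies.
    return md5($username . date('Y-m-d') . $secret) === $hmac_from_url;
}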

It's not how I'd have built it... but it's how *I* had to build it, since that's the only way I could log people into the LiveChat software in a way that would be based on their PoA profile but still be secure-ish.

If you remember way back, Chuck had set up a chat that wasn't secure at all. You could just change your cookie to be whichever user you wanted. I did it a little better :)

That's the long-winded way of saying that if your link was generated yesterday, it won't work today.
 

Makes sense.
 

Why not add a timestamp to the URL and include the same timestamp in the hash (to make sure the timestamp wasn't tampered with)? Then you could expire the link whenever you wanted, e.g. generation time + 24 hours, instead of by date. It sounds like the existing method would deny you if you loaded the page at 11:59pm and then clicked the chat link two minutes later.
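Something like this sketch, for instance (same caveats as above: md5 and the exp parameter name are made up for illustration, and RealChat would have to actually support passing an expiry like this):

Code:
<?php
// Sketch of the suggested variant: put an explicit expiry timestamp in the
// URL and include it in the hash so it can't be tampered with.
function build_chat_link_ts($username, $secret) {
    $expires = time() + 24 * 3600;               // good for 24 hours from generation
    $hmac = md5($username . $expires . $secret); // expiry is covered by the hash
    return '...&nn=' . urlencode($username) . '&exp=' . $expires . '&hmac=' . $hmac;
}

function chat_link_is_valid_ts($username, $expires, $hmac_from_url, $secret) {
    return time() < (int)$expires                                  // not expired yet
        && md5($username . $expires . $secret) === $hmac_from_url; // not tampered with
}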
 
That would be better. Actually there'd be lots of better ways to do it. Problem is that you're stuck with what RealChat supports.

http://www.realchat.com/doc/database-integration.html
 
Ah, I see. I thought it was a custom thing you wrote.
I wrote the vBulletin side and the chat user display, which caches because their API is slow-ass. I did not write the RealChat side, which dictates how it ultimately works.
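The caching part is nothing fancy -- roughly this kind of thing (a sketch only; the URL is a placeholder, not the real RealChat endpoint):

Code:
<?php
// Sketch: serve a cached copy of the chat user list and only hit the slow
// RealChat HTTP interface when the cache is older than $ttl seconds.
function chat_users_cached($ttl = 60) {
    $cache = '/tmp/chat_users.cache';
    if (file_exists($cache) && (time() - filemtime($cache)) < $ttl) {
        return file_get_contents($cache);   // cached copy is fresh enough
    }
    // Placeholder URL -- not the real RealChat API endpoint.
    $users = @file_get_contents('http://chat.pilotsofamerica.com:8080/users-placeholder');
    if ($users === false) {
        // API is slow or down; fall back to the stale cache if we have one.
        return file_exists($cache) ? file_get_contents($cache) : '';
    }
    file_put_contents($cache, $users);      // refresh the cache
    return $users;
}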
 

I understand. :)
 
I've seen much worse setups on business websites coded by pros who thought they knew it all. ;)

Nice job for a quick and dirty hack that works, man.

I learned this week that one crontab entry and a four-line ssh script are handling a hugely important task on our company web farm, and have been for 10 years. Heh heh.

Someone suspected that it had broken, and it was deployed before even our senior-most admin was with the company. We had to go hunt it down and read it.

I added two lines of sanity checks to it to make sure it was never accidentally run as root and left it alone. If it ain't broke...

This time around we documented it, though. ;)
 
Do you use something like Puppet/Chef/cfengine? I've found that it really takes away a lot of the mystery in situations like the above.

I've been converting our infrastructure to managing the configuration of every server entirely through Puppet for about the last year. It just makes sense.
 

Yes, it does. No, they haven't deployed any such tool yet. Yeah, I'm working on it. It's on the rather long clean-up list along with finding some hardware to put a Spacewalk server in for the Production side of things...

There have been some bigger fish to fry, like the fact that we're just going to HAVE to rip out the mail server that got built a while ago on DBMail.

The admin that put it in is a good guy and meant well (the idea was DB replication across sites), but the performance of an almost 300 GB MySQL database with continuous replication to another one is just god-awful... at least for what I'm used to from a mail server.

The box is a bruiser -- it could handle 10,000 users -- yet it chokes whenever someone tosses a large bulk mail through it.

The meeting where the admins voted 2:1 in front of management to dump it was met with "it gets another try on the new hardware; if it fails, it's coming out" from the bosses. It's been dogging badly at least once a week since then. Sounds like the bosses are ready to say, "Dump it."

So now I have to go drag 3000 users' worth of mail out of MySQL and put it back on a sane filesystem where it belongs, integrate Postfix with LDAP, and decide which IMAP server I want to deal with long-term...

Related to the Puppet thing -- we just had a minor goof of duplicating a bunch of UIDs last week. Ugh. Dumb. Dumb. Dumb. Ticked off PAM/LDAP, I'll tell ya! Kinda insidious to find at first, too... accounts acted like they had aged out. Learned that aging from the local login files AND from the LDAP objects has quite different effects, too...

Just cleaning house... slowly getting things back to being "conventional". There's at least a year's worth of that on the plate if I add in building Kickstart and custom scripts to pull things off of the Spacewalk server at build time, to make every single box load and look the same, like they should. These machines are in farms... it will only take a few well-tested "templates" and no one will ever have to manually load a machine again... they never should have let it get this big without doing that.
 
I have about 25,000 users across thousands of domains totaling terabytes worth of e-mail. It's a completely redundant system with multiple servers.

Postfix for SMTP
Dovecot for POP3/IMAP (Use Dovecot. Very nice)
OpenLDAP for auth
(Admin interface is something I wrote in PHP and honestly works pretty slick)

The incoming spam layer is:
1.) hits Postfix
2.) Postfix does some checks against Spamhaus
3.) if the Spamhaus checks pass, Postfix queues the mail and hands it off to amavisd-new
4.) amavisd-new has SpamAssassin loaded into it. It runs the SA rules and then checks against Pyzor, Razor, and a few others. If messages are quarantined it stores those in MySQL

The client spam interface for managing their quarantine and whitelist/blacklist is also something I wrote in PHP. Amavisd-new really helps by providing a MySQL store for this part.
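The quarantine listing in that interface is basically one query against the amavisd-new MySQL store. A rough sketch -- table and column names here are illustrative, not necessarily the stock amavisd-new schema:

Code:
<?php
// Sketch: list a client's quarantined messages from the amavisd-new MySQL
// store. Table/column names are illustrative only.
$db = new PDO('mysql:host=localhost;dbname=amavis', 'user', 'pass');

$stmt = $db->prepare(
    'SELECT mail_id, from_addr, subject, time_num
       FROM quarantined_msgs
      WHERE recipient = :rcpt
      ORDER BY time_num DESC'
);
$stmt->execute(array(':rcpt' => 'someone@example.com'));

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    // Show date, sender, and subject; releasing or deleting a message would
    // be separate actions keyed off mail_id.
    printf("%s  %s  %s\n",
        date('Y-m-d H:i', (int)$row['time_num']),
        $row['from_addr'],
        $row['subject']);
}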

The mail store consists of two custom-built Supermicro servers:
-QTY 2 Seagate Constellation SATA 500 GB drives, RAID-1, for the OS
-QTY 8 Seagate Constellation SAS 1 TB (I think) drives, RAID-50 (could be wrong), for mail

The two servers are using Linux-HA and DRBD, so it's a fully redundant mail store that will instantly fail over if one box drops.

POP3/IMAP checks are done against Dovecot running on the mail store server.

Incoming mail is a separate Postfix install that has the mail store mounted over NFS. Dovecot is the LDA and drops into the mail store.

Dovecot creates a bunch of index files which really helps.

Overage billing / reporting / etc is all through a custom deal I wrote.

The anti-spam layer is pretty resource intensive. I've been playing with tuning this lately. At the end of the day, though, it takes some CPU to SpamAssassin the volumes of mail we take in. Right now it consists of four servers. I might be able to cut that down soon by beefing them up a lot.

For IMAP/POP3... use Dovecot. No-brainer. It works very nicely, is fast, has lots of features, is very stable, and the developer(s) respond quickly. Good IRC channel.
 
The incoming spam layer is:
1.) hits Postfix
2.) Postfix does some checks against Spamhaus
3.) if the Spamhaus checks pass, Postfix queues the mail and hands it off to amavisd-new
4.) amavisd-new has SpamAssassin loaded into it. It runs the SA rules and then checks against Pyzor, Razor, and a few others. If messages are quarantined it stores those in MySQL

Sounds like the twin brother/sister of a couple of systems I've worked on in the past. One was Exim, though. ;) (Not too many folks have run Exim in production, so that was fun...) A friend's setup I did for him had qmail at his request... it was so messy to build him new versions at first, until he scripted all of that... but it was fast. I'll give wacky djb credit on that one.

The two servers are using Linux-HA and DRBD, so it's a fully redundant mail store that will instantly fail over if one box drops.

Now that's fascinating. You're the first person I've actually talked to, rather than just someone anonymous on the Net, who's running DRBD in production. I love the concept, but wondered if it slowed things considerably, and wondered a bit about how well it handles failures of network/boxes/whatever...

I assume you're not crazy enough to run DRBD over a WAN, though? ;)

POP3/IMAP checks are done against Dovecot running on the mail store server.

Incoming mail is a separate Postfix install that has the mail store mounted over NFS. Dovecot is the LDA and drops into the mail store.

Ahh, NFS. I may get some push-back from the Linux Architect on that one, but there are other ways...

Dovecot creates a bunch of index files which really helps.

I forgot it did that. I was playing in my head with whether it would be Dovecot or Cyrus "this time"... both have interesting challenges, but Dovecot is pretty sane, all in all.

The anti-spam layer is pretty resource intensive. I've been playing with tuning this lately. At the end of the day, though, it takes some CPU to SpamAssassin the volumes of mail we take in. Right now it consists of four servers. I might be able to cut that down soon by beefing them up a lot.

Two things working in my favor here... we're blocking all but about six external domains -- the system is intended as "internal only with some access to customers". On top of that, they already pay for an external scanning service for the Exchange farm... adding external filtering to mine wouldn't cost much, I don't think... depends on how they did the pricing with the external vendor.

For IMAP/POP3... use Dovecot. No-brainer. It works very nicely, is fast, has lots of features, is very stable, and the developer(s) respond quickly. Good IRC channel.

Good to hear. Only other thing I can't do is change the existing LDAP schema very much... unless absolutely necessary. It's very conventional, so that shouldn't be a problem, but typically stuff only gets read-only access to it.

Any persistent data will have to be handled outside of LDAP in our environment. At least it's OpenLDAP and we're not doing any AD integration (I suppose that deserves a "yet", since we COULD, but there are no plans to -- different uses).

We also have some PCI and possibly soon some HIPAA requirements that get tossed into this mixing bowl too... It'll be fun to put together the engineering docs and flowchart out the new setup (of which there are none for the existing system... grrrrr!)... but in the end, no one in the meetings will care as long as the multiple boxes receive/deliver the mail in a timely fashion, are up to date security-wise, and generally work. Even the multiple-data-center requirement fell off in favor of stability and the system running well for end-users.

Oh, almost forgot that fun part... all users on it are using webmail today... RoundCube... decent software, but they're finally talking about rolling out a desktop client for some/all of the users on this system. A bit of testing and keeping the front-end http server totally separated from the mail environment only makes sense... but that's not how it's set up today... (sigh!)...

Fun being back into full-time sysadmin work. Lots of design choices to make, fairly autonomously if you know your stuff. Usually one meeting is all it takes to turn someone loose on reconstruction of major systems architecture. Nice to be at a small company. But getting that meeting called... can take a while. :)
 
Sounds like the twin brother/sister of a couple of systems I've worked on in the past. One was Exim, though. ;) (Not too many folks have run Exim in production, so that was fun...) A friend's setup I did for him had qmail at his request... it was so messy to build him new versions at first, until he scripted all of that... but it was fast. I'll give wacky djb credit on that one.
I replaced qmail with this. I was never impressed by qmail. His ridiculous policy caused it to be left in the dust, and it was a major PITA to work with because of all the third-party patches you had to apply. That, plus any mail server that needs some parent process to restart it every time it crashes, is crap imo. Nothing like watching it go berserk and start ten million of them.
Now that's fascinating. You're the first person I've actually talked to, rather than just someone anonymous on the Net, who's running DRBD in production. I love the concept, but wondered if it slowed things considerably, and wondered a bit about how well it handles failures of network/boxes/whatever...
I also use it on all our production MySQL instances. I have a pretty decent setup that I've tested thoroughly. If hardware fails, the secondary server will have taken over and will be running the database again before I even know it happened (10-30 seconds; it could be shorter if one desired).

The performance really just depends. Obviously it's only going to be as fast as your network link between the two boxes. So if you have a lot of IO, then gigabit may not be enough.

I've done a lot of benchmarks. If your secondary can keep up with your primary (same hardware) and the link is faster than your IO, the performance impact is negligible.
denverpilot said:
I assume you're not crazy enough to run DRBD over a WAN, though? ;)
I've tried it. It does work. The problem, though, is that the IO performance on your primary is going to be limited by the WAN link. So it really isn't acceptable for most tasks.

denverpilot said:
Ahh, NFS. I may get some push-back from the Linux Architect on that one, but there are other ways...
At this point I only use it for the incoming mail that has already been spam filtered and just needs to be dropped into the mail directories with the Dovecot LDA. We're talking very minimal amounts of IO. It really made sense though because it allowed me to support multiple storage back-ends.
denverpilot said:
Fun being back into full-time sysadmin work. Lots of design choices to make, fairly autonomously if you know your stuff. Usually one meeting is all it takes to turn someone loose on reconstruction of major systems architecture. Nice to be at a small company. But getting that meeting called... can take a while. :)
Agree. The agility of operations in small companies is quite nice. We were using BIND with zone files as text files for years. Our support team would send cases to our IT team to make changes. One Friday, around 4pm, I got sick of that. I found PowerDNS, which looked nice and supports a MySQL backend. Then I wrote a management interface for our support team in PHP and migration scripts to move from BIND. Come Monday we had DNS our support team could manage themselves and all our clients migrated onto it (thousands of zones). Try that in a big company. Quite nice for me: I no longer had to spend an hour every day making DNS changes.
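That was doable over a weekend largely because the PowerDNS generic MySQL backend is just a couple of tables, so the PHP side is mostly plain inserts. A rough sketch, assuming the stock domains/records tables (the real work is the interface and validation around it):

Code:
<?php
// Sketch: add an A record through the PowerDNS generic MySQL backend.
// Assumes the stock domains/records tables; real code would validate input
// and bump the zone serial as well.
$db = new PDO('mysql:host=localhost;dbname=pdns', 'user', 'pass');

// Look up the zone the record belongs to.
$stmt = $db->prepare('SELECT id FROM domains WHERE name = :zone');
$stmt->execute(array(':zone' => 'example.com'));
$domain_id = $stmt->fetchColumn();

// Insert the record; PowerDNS picks it up on the next query, no reload needed.
$ins = $db->prepare(
    'INSERT INTO records (domain_id, name, type, content, ttl)
     VALUES (:domain_id, :name, :type, :content, :ttl)'
);
$ins->execute(array(
    ':domain_id' => $domain_id,
    ':name'      => 'www.example.com',
    ':type'      => 'A',
    ':content'   => '192.0.2.10',
    ':ttl'       => 3600,
));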
 