
New Primer Notification Tool

I'm not out yet. The next time I get a text from someone telling me XYZ has primers in stock, I will be. I can't speak for everyone on this forum, but there aren't a lot of second chances with most of the folks I shoot with.
 
Two suggestions, Storm...

1. Set up a temporary website, emulating one of the retail providers you're scraping, and use that to fully test your code. Relying on live scrapes of the actual retail vendors for testing won't work: in-stock events happen far too infrequently given the paucity of product.

2. Reset the monthly invoice clock for all subscribers each time there is a missed ping, until there are no more misses. Don't invoice again until every subscriber has gotten a full month of a healthy system.
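Suggestion 1 can be prototyped with a few lines of standard-library Python: a throwaway local mock server is enough to exercise a scraper end to end. Everything here (the HTML, the path, the stock marker) is an invented stand-in, not any real retailer's markup.

```python
# Minimal mock "retailer" for end-to-end scraper testing.
# The HTML, path, and stock marker are hypothetical stand-ins.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

FAKE_PAGE = b'<html><body><span class="stock-status">In Stock</span></body></html>'

class MockRetailer(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(FAKE_PAGE)

    def log_message(self, *args):
        pass  # keep test runs quiet

def check_stock(url):
    """The scraper under test: fetch the page, look for the in-stock marker."""
    return "In Stock" in urlopen(url).read().decode()

server = HTTPServer(("127.0.0.1", 0), MockRetailer)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

result = check_stock(f"http://127.0.0.1:{server.server_port}/primers")
print(result)  # the scraper correctly reports the fake listing as in stock
server.shutdown()
```

Because the mock is local, you can flip `FAKE_PAGE` to an out-of-stock variant and exercise the negative path just as easily, with no dependence on real drops.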

Hope you get it sorted.
This sounds right to me.
Judging from the various errors so far, the product is still in 'alpha testing'.
 
Let's see: he's an out-of-work welder, living in New Orleans, probably from the not-so-green energy industry, and now learning to code.


Personally I think he's a communist spy planted to further keep patriots from getting much-needed supplies. ;)

Good luck Storm, this can be a tough crowd
 
@Storm_ If you could write a program to block the other bots out and make yours first, I'd pay $$$$ for that.

Brownell's sent out a message for primers this morning. I was on and put them in my cart within 5 minutes. Didn't get any. You know these a$$holes are just selling them on GB. If Brownell's, Midway, P.V., and the others would put a 1000-primer limit per purchase, it would help out a lot.

I just e-mailed them and told them this. Others should do the same.
 
Hazmat and shipping would bring 1 brick to $100. Maybe limit it to 2000 or 3000, though. I think the limit is currently 5000.
 
This is very interesting. Unfortunately this will never work as currently described. Now before thinking me negative let me give you a little background. I program in 12 languages and have been creating code for Fortune 500 companies for over 20 years. Yep, created my first fully coded website in 1998. Helped build the largest ecommerce website in the world, a name you will all know, before slowing down to enjoy shooting and helping my local communities.

Why won't this work? It has absolutely nothing to do with the coding being created and perfected. Every major ecommerce website in the world is now using delayed pathways as a security precaution to discourage brute-force attacks by hackers: hackers who would and could continuously attack a single ecommerce website like Basspro.com, Kohls.com, the entire Shopify ecommerce platform, eBay, and even Amazon. These attacks were only stopped previously when ransoms WERE paid. This included EVERY transportation website like Expedia, Travelocity, Kayak, Hotwire, and many, many more.

Expedia and Travelocity, for example, are names you know; they created and use the same type of "just in time" code to deliver airfare quotes that this code would use to deliver just-in-time primer availability. So if it's the same type of code, why won't it work?

Good question, and here's the simple answer. ALL ecommerce websites hosted by AWS (Amazon Web Services, by far the biggest player) and every major hosting network, at least in the US, now purposely delay server display information. As the consumer, you will never notice it. However, this delay gives AWS's and other security software the milliseconds it needs to detect a brute-force attack and then stop it cold! Before the attack can cripple the account, the computer feed is switched. This happens extremely fast.

Scraping websites is how the hackers and bad apples would "steal" realtime, published online content to then display on their bogus ecommerce websites. This included travel websites which relied 100% on realtime data in an extremely competitive marketplace.

People would then use these bogus websites and give out their financial data. You have all heard of this happening. It used to be almost every day. Now you rarely hear about it happening. Why? Because when scraping a website in real time was blocked, the hackers moved on. It's hard to sell tickets that are already gone. Most people always had two screens open at a time to buy airline tickets. Now hackers simply try to, and often succeed in, attacking data storage networks instead of ecommerce websites publishing delayed content.

You have probably experienced this delay yourself: every time you try to buy powder or primers that appear to be in stock and then, right before your eyes, they disappear. Your items were actually gone anywhere from 2-10 minutes before you tried to check out. Your browser cached the ecommerce page when you first viewed it, and an updated page was not served and shown until you changed the screen.

Scraping a website, or even many websites, is 15-year-old technology and no longer relevant. For this to work, the people involved will need to negotiate direct, real-time data feeds with any and all stores/vendors/ecommerce websites from which they hope to show primer availability and any future content. This will require negotiating licensing, profit sharing, and copyright releases.
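For what it's worth, if a vendor ever did license a direct feed, consuming it could be as simple as parsing a JSON inventory payload. The payload shape, field names, and data below are entirely hypothetical, invented for illustration.

```python
import json

# Hypothetical payload a vendor's licensed inventory feed might return.
sample_feed = """
{
  "items": [
    {"sku": "SRP-1000", "name": "Small Rifle Primers (1000)", "in_stock": true,  "qty": 42},
    {"sku": "LRP-1000", "name": "Large Rifle Primers (1000)", "in_stock": false, "qty": 0}
  ]
}
"""

def in_stock_items(feed_text):
    """Return the names of items the feed reports as in stock."""
    data = json.loads(feed_text)
    return [item["name"] for item in data["items"] if item["in_stock"]]

print(in_stock_items(sample_feed))  # ['Small Rifle Primers (1000)']
```

A licensed feed like this would sidestep both the staleness and the permission questions argued about in this thread, since the vendor controls exactly what is published and when.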

Oh, I forgot. Scraping website content for economic gain without permission violates US copyright law and is subject to a $10k fine per instance. This DOES NOT fall under the internet Fair Use law. Saying "I am sorry, I'll remove all of your content and never do it again" doesn't work in real life. As soon as you scrape that content without permission and use it in public (posting it on your website), you are immediately liable for copyright infringement, just as if you had posted an unlicensed picture of Mickey Mouse. Your public posting will also be recorded almost immediately at https://archive.org/web/.

This website is where you can find billions of web pages even when they have been deleted and is used by court experts every day. Guess who created this many years ago? Jeff Bezos. Why? Because Amazon is the #1 legal entity enforcing copyright laws since Amazon content is "used" more than any other content in the world. All of those major websites using AWS are Amazon clients!

A great idea. Just need a different way to execute it!
 
With all due respect, this is incorrect information.

Regarding "delays":

The example used of seeing a product in stock then out of stock when added to the cart is due to caching. Websites fully render the code for a page, and refresh it every so often.

This saves the server from having to work to bundle up all the CSS, HTML, and JS each time a page is loaded, and instead, send a "pre-rendered" page. This is on the server side.

The browser does cache items, but properly designed and working websites break the cache when dynamic content is updated.

Additionally, browser caches are held by browsers that view a site multiple times. When browsing programmatically, simply using a GET request, there is no client-side caching.

Non-technical jargon example:
Caching is like keeping a cheat sheet for a big project: you use the cheat sheet instead of looking up each piece of info every time, because it's faster. But every so often you have to update your cheat sheet. A cache works the same way. The delay you perceive is you getting an old cheat sheet, but the info on that sheet is now wrong.
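The cheat-sheet analogy maps directly onto a time-to-live (TTL) cache. This toy sketch (with times compressed to fractions of a second) shows how a cached "In Stock" answer keeps being served after the real value has changed, which is exactly the delay shoppers perceive:

```python
import time

class TTLCache:
    """Serve a stored value until it is older than ttl seconds."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.value = None
        self.stored_at = None

    def get(self, fetch):
        # Refresh only when there is no cached copy or it has expired.
        if self.stored_at is None or time.monotonic() - self.stored_at > self.ttl:
            self.value = fetch()
            self.stored_at = time.monotonic()
        return self.value

inventory = {"primers": "In Stock"}
cache = TTLCache(ttl=0.2)

first = cache.get(lambda: inventory["primers"])   # fresh fetch
inventory["primers"] = "Out of Stock"             # reality changes...
stale = cache.get(lambda: inventory["primers"])   # ...but the cache still serves the old answer
time.sleep(0.25)                                  # let the cached copy expire
fresh = cache.get(lambda: inventory["primers"])   # now the update is visible

print(first, stale, fresh)  # In Stock In Stock Out of Stock
```

Real sites use TTLs of seconds to minutes, which is why an item can look in stock for a couple of minutes after the last one sold.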

"Scraping is irrelevant":

"Scraping" a website is not a 15-year-old technology; scraping just means programmatically collecting the website and then using code to parse and break it apart into useful information.

It doesn't become irrelevant; it's just a name for programmatically collecting data from a website.

Non-technical jargon example:
This is the equivalent of saying fishing is irrelevant, which doesn't make much sense because fishing is an action. Likewise, "scraping" is just an action that can be done to anything that stores data.

Just like you can fish in a lake for carp, you can scrape a database of books for their names, or scrape a website for its stock status.
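To make that concrete, here is a minimal sketch of what scraping means in practice: take some HTML, parse it, pull out one fact. The product page below is invented, and Python's standard-library parser is enough.

```python
from html.parser import HTMLParser

# Invented stand-in for a fetched product page.
PAGE = """
<html><body>
  <h1>CCI #400 Small Rifle Primers</h1>
  <span class="availability">Out of Stock</span>
</body></html>
"""

class StockParser(HTMLParser):
    """Grab the text inside the element whose class is 'availability'."""
    def __init__(self):
        super().__init__()
        self.in_status = False
        self.status = None

    def handle_starttag(self, tag, attrs):
        if ("class", "availability") in attrs:
            self.in_status = True

    def handle_data(self, data):
        if self.in_status:
            self.status = data.strip()
            self.in_status = False

parser = StockParser()
parser.feed(PAGE)
print(parser.status)  # Out of Stock
```

Swap `PAGE` for the body of a real HTTP response and the same parse step applies; that fetch-then-parse pair is all "scraping" has ever meant.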

"Scraping is illegal":

I don't care to discuss legal issues in depth on the internet, but there are many cases surrounding this.

From my analysis and my counsel's, PrimerPrimer is perfectly fine.

LinkedIn set some great precedent for scraping the web.
See: https://parsers.me/appeal-from-the-...ourt-for-the-northern-district-of-california/

I don't have a non-technical example for this, as it could be interpreted as legal advice, and I am not a lawyer.


Edit: adding more non-technical examples for better understanding
 
I'm not a programmer but have worked as a manager and project manager in application/software development for over 20 years. From my experience, what Storm is saying is correct.
 
First let me again state that your business idea is awesome! Brilliant even! It truly is.

So logic would dictate that, since it is an awesome idea with an attentive and waiting audience, why hasn't someone else already done this? Why hasn't someone like AmmoSeek, a completely free service, invested in new technology platforms, built better database integration with partner companies, and done this already? Laziness? There are a ton of brilliant software developers out there. The technological know-how has existed FOR YEARS!

Travelocity, and the hundreds of travel-related clones created since, proved that technology can be quickly duplicated. So technology wasn't the reason hundreds of commercial travel sites didn't pop up overnight. There must have been a reason not related to unique software or technology in general. Well, there was: a huge stumbling block and legal roadblock called American Airlines. American Airlines owned and provided ALL of the airlines' flight & reservation data through its proprietary Sabre reservation system. Every airline in the world had to license with Sabre to use this data.
(1996: Travelocity.com is launched as a joint venture of Sabre Interactive and Worldview Systems Corp.)

Delays

Actually, my information is 100% correct. So are your definitions of how the Internet works for displaying web pages, from the server side through to the browser side. No dispute from me at all. You've described exactly how server-side and browser caching work and why they are used.

However, I wasn't describing caching at all. I was describing patented, security-based server content controls using proprietary software to deliver database-controlled content on a specific timed schedule. This patented technology has absolutely nothing to do with server-side or browser caching. It does have everything to do with website security and server load, specifically combating bots. Bots don't just crash a single category or website page; they dramatically affect the entire consumer experience and are costing retailers millions a year now. What I am describing happens before the code is delivered to the server for rendering, which you have perfectly defined.

I am not knocking you or your ideas although I can certainly understand how my comments must appear to anyone wanting to protect their business and business idea. The patented controls I mention above were designed to combat the very real problem you wrote about and acknowledged in your own post below referring to scraping bots:

"This is actually a major problem. We usually send a text 1-2 minutes before sites start getting slow then crash. This is all from other bots too. It gotten pretty wild."

"Scraping is irrelevant":


Again you have defined website scraping perfectly: its just a name for programmatically collecting data from a website.

What makes it irrelevant is how you have described your use of it.

"Scraping is illegal":

As you have described your process and planned use of this data collection technique it is illegal. You have offered the entirely wrong court case (which is still pending by the way) for your business model. Why is this? The court case you present is in regards to whether a 3rd party can "scrape" the LinkedIn website to gather the PUBLIC PROFILES of LinkedIn members.

The key wording is PUBLIC PROFILES. LinkedIn, like Facebook, is a platform. Even Google is a platform. They did not create, nor do they own, their members' content. Exactly like this website: it's 100% created by members, and members agree through each website's respective terms of service that ALL of their created content is available publicly to anyone. There is no expectation of privacy and no copyright protection as it pertains to public profile information on each website. Each website can and does freely sell or share this member-created profile information with advertisers and other platform users.

Midsouth Shooters Supply, Basspro, Midway USA, Cabelas, and Scheels are all content publishers and not PUBLIC entities. They are exactly like US News, Fox News, The Federalist, and millions more. Their published online content, i.e. all text, charts, images, product pricing, and product availability, is not in the public domain for commercial reuse. It is simply available for viewing by the public in general. Their ecommerce websites are simply extensions of their corporate identities, and their entire website content is covered by US copyright law. Period. No one, and this includes your current business model, can independently "scrape" content from any online publisher's website, blog, etc. for commercial reuse without the express written permission to do so. Regardless of the content manipulation techniques used or how it is re-displayed in the future, any 3rd-party commercial use of copyrighted materials, including online published material, is illegal. Fair use does not apply, per multiple court rulings.

You have a great idea! Done correctly, I would sign up in a minute flat online. I am sure you will keep pursuing it, perhaps by creating an active publishing partnership with the same companies whose data you currently wish to "scrape". With direct access to their live inventory databases, you wouldn't need to "scrape" anything, and it could become an instant, ACCURATE clearing house for a huge market that is inefficient for these companies to pursue on their own.

Costco doesn't make their money selling products in their stores. Their real profit is in selling memberships! Accurate, live inventory sells memberships.

I only took the time to comment originally due to my interest in reloading, shooting, and my lifetime support for entrepreneurs.

My opinions and comments are just that, opinions and comments. However, I have also spent many years in court working for ecommerce businesses and their lawyers.

I would suggest only that "scraping" websites using bots is not the way to build your business if you desire long term growth. I know this for a fact. Currently three major, exclusively online retailers, with the help of the DOJ Intellectual Property Division, have successfully filed lawsuits against two companies utilizing bots to crawl their ecommerce websites. The DOJ is utilizing technology which unequivocally proves who owned the bots. Advanced IP and tracing software that doesn't care how you try to filter the attack. They are seeking millions in monetary damages for lost revenue due to the website attacks (these laws have been on the books since 2001 when Clinton signed them into law. Edit: Law initiated under Clinton but actually signed by Bush). And the DOJ is enforcing federal copyright law to further discourage future bot use. Copyright law alone will generate millions in fines.


Bots have also been successfully categorized as denial-of-service (DoS) attacks:
"In this form of hacking, an intruder floods the system or servers with traffic (bots are considered traffic), denying access to legitimate users. Florida penalizes this more severely, categorizing this crime as a felony in the first degree."

My post began and remains not as an attempt to disparage or insult you or your idea, just an opportunity to share some ideas. I thought what is written below was great, which is the only reason I took so much of my time to comment. I also understand that the people on this forum are a great initial market test:
-----------------------------------------------------------------------------------------------
"How it started

A few months ago my buddies and I began laying the foundation for starting a commercial reloading operation. Quickly, we realized how bad the primer shortage had become.

After a bit of digging, many hours on hold, and chatting with a lot of different companies I realized something interesting.

While significantly less, the supply of primers to retailers has not stopped, and many weeks 1 or 2 retailers will actually get primers in stock online. (But you do have to be quick; they sell out in minutes)

So, we got to work. We built some code that will check about 23 different retailers at a very fast interval and text us a link when they are in-stock.

Over the last 30 days of using our system we were able to get enough primers to not worry about proceeding with our little commercial reloading operations."
-----------------------------------------------------------------------------------------
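The quoted description, polling a list of retailers on an interval and notifying on a transition to in-stock, boils down to a simple loop. In this sketch the "retailers" and the notifier are stubbed out so it runs standalone; a real version would fetch and parse each site and send a text instead of appending to a list.

```python
# Stubbed polling loop: the retailers and the notifier are invented stand-ins.
responses = {
    "retailer-a.example": iter(["Out of Stock", "Out of Stock", "In Stock"]),
    "retailer-b.example": iter(["Out of Stock", "Out of Stock", "Out of Stock"]),
}

def check(site):
    """Stand-in for fetching and parsing a retailer's product page."""
    return next(responses[site])

alerts = []

def notify(site):
    """Stand-in for texting subscribers a link."""
    alerts.append(site)

last_seen = {site: "Out of Stock" for site in responses}
for _ in range(3):              # three polling rounds; a real loop would sleep between rounds
    for site in responses:
        status = check(site)
        if status == "In Stock" and last_seen[site] != "In Stock":
            notify(site)        # alert only on the transition to in stock
        last_seen[site] = status

print(alerts)  # ['retailer-a.example']
```

Alerting only on the out-of-stock to in-stock transition is the key design choice: it keeps subscribers from being re-texted on every polling round while an item stays listed.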


I will not respond again. Good luck to you and I wish you success!
 
How is your service different from Distill?
How do you get around the wishlist trick at Natchez, where you have to view the product or have it on your wishlist to see it in stock?
How much faster are your texts than Discord?
How long does it take your system to send all of the texts?
 
I hear from a reliable source that a computer in Jamaica was used ;) to test this theory recently on bullets and powders, but it works the same way for primers. The same guy using the computer in Jamaica has received numerous "pings" about 5-10 minutes before the actual site notifications went out, giving a head start to the guy in Jamaica, who needed a few things he was low on. For anyone who questions whether this works, my buddy ;) has assured me it does, as he was needing some H1000.
Whether it's legal I really can't say, since there are so many gray areas revolving around this type of thing. If the general population knew what was actually capable of being done from the very device you're currently reading this with, they would be scared to death lol.
 
