# Captive Portal Notes
## Current situation -- Aug 13, 2018
I think captive portal handling is just too complex for us to find a single solution we can all agree upon. How many old devices must we support? Different devices require different strategies. Are trade-offs required? Can we really hope to support them all?
So I propose that we start accumulating tools that will let us decide, later on, which trade-offs we are willing to live with. What tools?
## Capabilities that may figure in a final solution
- Ability to divert queries for vendor probe URLs to our server -- dnsmasq will accept a file mapping those hostnames to their IP address equivalents (see the dnsmasq sketch after this list).
- Apache will also accept an included list of ServerAlias entries, which can be used to redirect to our Python server at port 9090 (see the Apache sketch below).
- Ability to easily change our server (I prefer Python/WSGI) to tweak responses.
- Ability to record the MAC addresses of clients we know about.
- Ability to let iptables divert all HTTP accesses to our server if they are not marked, marked meaning we already know the MAC address (see the iptables sketch below).
- Ability to age the MAC address cache, and restore default behavior after some period.
- Ability to respond differently based upon {{ iiab_gateway_enabled }} and actual Internet visibility.
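As a minimal sketch of the dnsmasq side: its `address=/domain/ip` directive (or an `addn-hosts` file in hosts format) answers queries for vendor probe hostnames with our own address. The hostnames, file path, and 10.10.10.10 address below are illustrative stand-ins, not IIAB's actual configuration.

```
# /etc/dnsmasq.d/captive-portal.conf (illustrative path and address)
# Answer vendor connectivity-check hostnames with our own IP:
address=/captive.apple.com/10.10.10.10
address=/connectivitycheck.gstatic.com/10.10.10.10
address=/www.msftconnecttest.com/10.10.10.10

# Alternatively, point dnsmasq at a hosts-format file we can regenerate
# (dnsmasq rereads it on SIGHUP):
# addn-hosts=/etc/dnsmasq.d/captive-hosts
```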
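And a sketch of what the Apache side might look like, assuming the alias list lives in its own included file; the vhost name, file paths, and redirect target are assumptions.

```
# /etc/apache2/sites-available/capture.conf (illustrative)
<VirtualHost *:80>
    ServerName box.lan
    # capture-aliases.conf holds one "ServerAlias <hostname>" line per
    # vendor probe hostname we have learned about:
    Include /etc/apache2/sites-available/capture-aliases.conf
    # Hand everything off to the Python/WSGI server on port 9090:
    RedirectMatch 302 ^/(.*)$ http://box.lan:9090/$1
</VirtualHost>
```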
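For the iptables diversion and marking, one workable shape is a dedicated NAT chain: known MACs get a RETURN rule (the "mark"), everyone else is redirected to our server. The interface name, MAC, and port below are illustrative.

```
# Divert HTTP from unknown clients to the portal.
iptables -t nat -N CAPTIVE
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 -j CAPTIVE
# A client we already know about -- skip the diversion:
iptables -t nat -A CAPTIVE -m mac --mac-source 00:11:22:33:44:55 -j RETURN
# Everyone else lands on our server:
iptables -t nat -A CAPTIVE -p tcp --dport 80 -j REDIRECT --to-ports 9090
```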
## Game Plan
Develop a script which puts our server into a recording mode. This mode appends the hostname of each hijacked http:// GET request to the file which redirects requests to our captive portal. That way, our server can learn the probe URLs used by new or old operating systems that we do not know about. We will capture every request generated by a client's attempt to associate with our AP.
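A minimal sketch of that recording mode, assuming the portal runs as a WSGI app and that the redirect file is the `addn-hosts` file from the dnsmasq sketch above (both paths and the portal IP are illustrative):

```python
# record_mode.py -- WSGI middleware that logs the Host of every hijacked
# GET so we can learn vendor probe hostnames.  Paths are illustrative.
REDIRECT_FILE = '/etc/dnsmasq.d/captive-hosts'
PORTAL_IP = '10.10.10.10'

def recording_middleware(app):
    seen = set()
    def wrapper(environ, start_response):
        host = environ.get('HTTP_HOST', '').split(':')[0]
        if environ.get('REQUEST_METHOD') == 'GET' and host and host not in seen:
            seen.add(host)
            # Append in the hosts-file format that addn-hosts understands;
            # dnsmasq must be sent SIGHUP before it rereads this file.
            with open(REDIRECT_FILE, 'a') as f:
                f.write('%s %s\n' % (PORTAL_IP, host))
        return app(environ, start_response)
    return wrapper
```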
My current thinking suggests that a full DNS jail, running in steady state, is counter-productive -- because we would never be able to sense that the Internet has become available.
We can set an iptables rule which redirects all unmarked packets to our server (the CAPTIVE chain sketched above).
When the flag {{ iiab_gateway_enabled }} is false, our server will never set this mark on client MAC addresses.
Our server itself will always have its MAC address marked (see the marking sketch below).
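Here is a sketch of how the server-side marking and aging might work, assuming the CAPTIVE chain from the iptables sketch above; the ARP-table lookup, the TTL, and the function names are all illustrative, not a settled design.

```python
# mark_clients.py -- add and expire RETURN rules ("marks") for known MACs.
import subprocess
import time

MARK_TTL = 24 * 3600          # age marks out after a day (arbitrary choice)
marked = {}                   # mac -> time marked

def client_mac(ip):
    """Look the client's MAC up in the kernel ARP table."""
    with open('/proc/net/arp') as f:
        for line in f.readlines()[1:]:
            fields = line.split()
            if fields[0] == ip:
                return fields[3]
    return None

def mark(mac):
    # Insert at the top of CAPTIVE so RETURN fires before the REDIRECT rule.
    subprocess.check_call(['iptables', '-t', 'nat', '-I', 'CAPTIVE',
                           '-m', 'mac', '--mac-source', mac, '-j', 'RETURN'])
    marked[mac] = time.time()

def maybe_mark(ip, iiab_gateway_enabled):
    """Only mark clients when we are actually acting as a gateway."""
    if not iiab_gateway_enabled:
        return
    mac = client_mac(ip)
    if mac and mac not in marked:
        mark(mac)

def unmark_stale():
    """Age the MAC cache: delete marks older than MARK_TTL."""
    for mac, when in list(marked.items()):
        if time.time() - when > MARK_TTL:
            subprocess.check_call(['iptables', '-t', 'nat', '-D', 'CAPTIVE',
                                   '-m', 'mac', '--mac-source', mac, '-j', 'RETURN'])
            del marked[mac]
```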
We may need to explore different server responses to hijacked HTTP requests -- to see if there is a way to coax the device into opening a fully functional browser -- some OSes bring up a crippled captive-portal browser instead.
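One way to run those experiments is a WSGI responder that contrasts an "Internet is up" reply with a portal redirect. The probe paths, expected bodies, and `box.lan` hostname below are assumptions to verify per OS, not confirmed behavior; the reachability check deliberately uses a raw IP so it works from inside a DNS jail.

```python
# responses.py -- WSGI sketch for experimenting with probe replies.
import socket

def internet_is_up():
    """Cheap reachability probe that bypasses the DNS jail entirely."""
    try:
        socket.create_connection(('8.8.8.8', 53), timeout=2).close()
        return True
    except OSError:
        return False

def application(environ, start_response):
    path = environ.get('PATH_INFO', '')
    if internet_is_up():
        if path.endswith('/generate_204'):
            # Android probes expect an empty 204 when all is well.
            start_response('204 No Content', [])
            return [b'']
        # Apple probes look for a body containing "Success".
        start_response('200 OK', [('Content-Type', 'text/html')])
        return [b'<HTML><HEAD><TITLE>Success</TITLE></HEAD>'
                b'<BODY>Success</BODY></HTML>']
    # No Internet: redirect to our portal page, which most OSes take as
    # the cue to open their (sometimes crippled) captive-portal browser.
    start_response('302 Found', [('Location', 'http://box.lan/')])
    return [b'']
```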
One source suggested that whenever a client experiences an SSL failure, there will be a behind-the-scenes HTTP request to the OS-specific (hopefully hijacked) URL. This can become our first line of defense against the "man in the middle" warning.