[NCLUG] Egress Filtering

John L. Bass jbass at dmsd.com
Tue Aug 14 18:32:31 MDT 2001


	You brought up KISS -- isn't this where KISS makes sense?  You have
	thousands of irresponsible (= unaware of RFC) hosts.  You can attempt to
	get each cable modem user and DSL user to correctly set up their modem.


99% of cable and DSL users either have the provider set up the modem, or
follow the provider's cookbook. The sheer number of hosts of this class on the
network is what makes these attacks profitable. In these boxes it truly is "one rule".
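As a sketch of what that "one rule" amounts to -- drop any outbound packet whose source address is not in the customer's own assigned block. The prefix and helper name here are illustrative, not from any real deployment:

```python
import ipaddress

# Hypothetical customer allocation (a documentation prefix, for illustration).
CUSTOMER_NET = ipaddress.ip_network("203.0.113.0/29")

def egress_allowed(src_ip: str) -> bool:
    """The single stock egress rule: only forward packets whose source
    address belongs to the customer's own assigned block. A spoofed
    source (e.g. one forged by a DDoS agent) fails this check."""
    return ipaddress.ip_address(src_ip) in CUSTOMER_NET

# A host's own address passes; a forged source does not.
egress_allowed("203.0.113.5")   # True
egress_allowed("10.9.8.7")      # False
```

The same check is one line in any stock router filter language, which is the point: it never changes as the ISP's topology does.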

At the ISP end it can easily be hundreds of rules in dozens of routers
for a multi-homed ISP using ATM/BGP dynamic routing. A single typo can take down
large sections of their net, and take significant time to find.

Complexity is an exponential problem, and has to be weighed against the costs.
Here we have to weigh the cost of maintaining a single filter rule in a stock
configuration on many customer routers against the cost of maintaining a larger
number of filters (and their interactions in a dynamic environment) where mistakes
can have a huge impact.

So is KISS a single stock config/filter rule for the customer's gateway? Or is KISS
trying to coordinate the maintenance of many rules in several/many routers at the ISP,
where the impact of a mistake can cripple a significant portion of a facility?

If you really want to slide down the slope, the cable/DSL customer routers typically
have enough performance to filter/scan packets/streams for known trojans and viruses
too -- something that is simply impossible to do at an ISP's core gateway router/switch.

	I don't know about that.  Maybe 'mandate' is a strong word.  After all,
	this is the Internet we're talking about.  The best we can do is
	suggest, through the RFC processes. -- And we already do that.  This
	document is a "Best Current Practice" document describing how routers
	should filter: ftp://ftp.isi.edu/in-notes/bcp/bcp38.txt

Notice that this talks about ingress filtering from each customer's portal. Isn't
the other end of that stick placing the filter at the customer's end of the
portal and requiring/doing customer-based egress filtering?
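For comparison, the BCP 38 ingress check at the ISP head end is the mirror image of the customer-side rule: a packet entering from a given customer portal must carry a source address from that customer's allocation. The portal names and prefixes below are made up for illustration:

```python
import ipaddress

# Hypothetical per-portal allocations at the ISP head end
# (documentation prefixes, for illustration only).
PORTAL_PREFIXES = {
    "portal-a": ipaddress.ip_network("198.51.100.0/28"),
    "portal-b": ipaddress.ip_network("203.0.113.64/28"),
}

def ingress_allowed(portal: str, src_ip: str) -> bool:
    """BCP 38-style ingress check: accept a packet from a customer
    portal only if its source lies within that portal's allocation."""
    return ipaddress.ip_address(src_ip) in PORTAL_PREFIXES[portal]
```

Note the difference in scale: the ISP must keep one such entry per portal, and keep them all current as allocations change, where the customer box needs exactly one static rule.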

Notice also that the discussion of ingress filtering at the ISP doesn't stop abuse
via other hosts on the same local subnet when the primary aggregation at the local
end is a switched/shared subnet. Placing the filters in the customer's box, however,
prevents viruses and trojans from using other hosts on the ISP's subnet to
amplify the attack.


There are a number of RFC issues that make a lot of sense, but have been left by
the wayside due to changing technical limitations. We live with high packet loss
on major sections of the Internet today because router-based flow controls were
abandoned for performance reasons, resulting in switching rather than routing.

Again, this is a political problem ... responsibility lies with the packet source,
unless they outsource that responsibility to a third party, like their ISP, as part
of a service agreement. And in some ways it makes sense for ISPs to mandate compliance
as part of the Acceptable Use Policy.

	> I believe that everyone is responsible for the devices they directly manage/own,
	> and no one else. A customer might choose to outsource part or all of that obligation
	> to a contractor, which might be their ISP, for a fee or bundled into another service.
	> There are a number of ISP's which manage selected customer networks.

	This is what has me torn about this.  I would normally agree with you,
	simply because if I let @Home filter some traffic, what is to stop them
	from, say, filtering other ports?  Say... port 80?  Uh.. bad
	example...nevermind.

@Home does this to enforce their pricing model -- no servers. They do not
filter business services where provided. They can do that with one rule
at the head end of the subnet.
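That head-end policy reduces to a single port rule as well. The sketch below is my reading of it, with an invented helper name and the server-port set reduced to port 80 for illustration:

```python
# Sketch of an @Home-style head-end rule: drop inbound connections to
# server ports for residential customers, pass everything for business
# accounts. The port set and helper name are illustrative assumptions.
BLOCKED_SERVER_PORTS = {80}

def head_end_allows(dst_port: int, business: bool) -> bool:
    """One rule at the subnet head end enforcing the no-servers pricing
    model: business accounts are exempt, residential server ports drop."""
    return business or dst_port not in BLOCKED_SERVER_PORTS
```

The point stands either way: a per-subnet policy this coarse costs one rule, unlike per-customer anti-spoofing filters.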


	> I assert that it's infeasible to hold an ISP accountable for finding a technical
	> solution for every form of evil packet stream a customer's network might be
	> able to inject into the network.

	It is downright rude -- especially when it actually IS feasible.  When I

If you are saying this statement is downright rude -- reality check time.

I went out of my way to reference packet flooding and viruses/trojans as evil
packet forms, where it is highly unrealistic to require that the ISP provide
filtering/scanning of packets and encapsulated mail and other document streams.

The ISP head-end egress filter proposal solves NONE of these critical and basic
issues, which are the root cause of the symptoms you are addressing.

	DDoS is the perfect crime, and I can't get people to do anything about

The perfect crime is trojans/viruses.

DDoS is the symptom; the problem IS trojans/viruses. To really fix the problem,
pay attention to the root cause of these attacks. Go after the software vendors that
refuse to use defensive APIs in their products, which repeatedly propagate these
agents. Leave the ISPs alone -- they did not create the problem, and they should not
bear the cost of containing it. Send the bill to Microsoft if you can. Any other
solution is a half-witted attempt to control the problem.

The goal of these agents is to acquire enough insecure machines to do their bidding.
Code Red clearly made that point -- and nothing outlined in the filtering proposal
even attempts to stop the spread of that agent, or the impact agents of that class
can have.

	> I stated that this is a slippery slope, and once you start down this
	path,
	> it is difficult to stop and is almost certainly going to hit solid
	technical
	> barriers.

	Yeah... But people need to think about it, at the very least.  From the
	massive volumes of traffic this thread created, it would appear that I
	at least made this /list/ think about it.

But please at least frame the discussion starting with the problem agent,
not the symptoms. Fixing symptoms is an endless game: change the rules,
and they change the play of the attack. Work on ending the game itself.

John
