NetExtender doesn't pass traffic on most connection attempts
I have SonicWall TZ appliances in each of four locations, as summarized below. All nodes are connected to each other using IPsec tunnels. All four nodes are configured identically; the only differences are the local subnet addresses.
Location 1, 10.1.x.x subnet, TZ350, -53n firmware build
Location 2, 10.2.x.x subnet, TZ300W, -79n firmware build (recently hotfixed *)
Location 3, 10.3.x.x subnet, TZ350W, -79n firmware build
Location 4, 10.4.x.x subnet, TZ300W, -53n firmware build
Ever since I upgraded Location 2 and Location 3 to the -79n firmware build, my users have had trouble establishing a working connection to those appliances using NetExtender from their Windows 10 laptops.
When using NetExtender, end users usually have to try several times before they get a successful connection. When it fails (~80% of the time), the user authenticates properly and gets an IP address in the proper range, but no traffic passes between the SSLVPN user and the LAN. Each time it fails, the logs report information like this:
destination for 10.2.0.3 is not allowed by access control [this is my dns server]
destination for 22.214.171.124 is not allowed by access control
destination for 10.1.0.3 is not allowed by access control [this is my secondary dns server]
destination for 126.96.36.199 is not allowed by access control
destination for 188.8.131.52 is not allowed by access control
destination for 255.255.255.255 is not allowed by access control
destination for 184.108.40.206 is not allowed by access control
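For anyone triaging the same symptom, the blocked destinations can be summarized from a log export with a short script. This is only a sketch: the message format is assumed from the lines above, and the input list stands in for however you export your logs.

```python
import re
from collections import Counter

# Matches drop messages of the form shown above, e.g.
# "destination for 10.2.0.3 is not allowed by access control"
PATTERN = re.compile(
    r"destination for (\d{1,3}(?:\.\d{1,3}){3}) is not allowed by access control"
)

def blocked_destinations(lines):
    """Count how often each destination IP was dropped by access control."""
    counts = Counter()
    for line in lines:
        m = PATTERN.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Sample lines like the ones in the logs above:
sample = [
    "destination for 10.2.0.3 is not allowed by access control",
    "destination for 10.1.0.3 is not allowed by access control",
    "destination for 10.2.0.3 is not allowed by access control",
    "some unrelated log line",
]
print(blocked_destinations(sample).most_common())
# → [('10.2.0.3', 2), ('10.1.0.3', 1)]
```

Seeing which hosts (DNS servers, broadcast, etc.) dominate the drops can help narrow down which route or access list is missing.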
After about a minute of this failed traffic, the NetExtender client disconnects. Then the user repeats the process 4-5 times until they connect and are able to pass traffic. When the connection is successful, no such firewall errors are reported and the user can remain connected, passing traffic, all day.
Initially, the users also had trouble maintaining a connection using MobileConnect, but that error appears to have been resolved by a recent hotfix (*) supplied by support. The hotfix, however, did not solve the problem with NetExtender. While I'm happy and thankful that support was able to fix the MobileConnect issue, I'd prefer that users use NetExtender for a variety of reasons, so I'm eager to get it working properly. Can anybody lend any insight into what I might do to prevent the errors outlined above, so that traffic passes 100% of the time users connect?
The IP address range that I use for SSLVPN users does not overlap any other networks or ranges on any other interfaces on the Sonicwall.
The configuration didn't change when upgrading from the -53n firmware to the -79n firmware. Connections were always successful on the -53n build.
I foolishly assumed that I had a good cloud backup of my configuration from the -53n firmware, but those cloud backups never actually got saved.
We have tried numerous NetExtender versions, including 10.0.0.297 and 10.2.0.300.
Packet captures show the same error as the regular logs, which suggests that a rule is in place prohibiting the traffic. As far as I can tell, no such rule exists, and the fact that users do eventually get a connection that passes traffic seems to confirm that.
Anyway, I welcome any and all feedback. Thanks in advance.
Have you tried creating an access rule from the SSLVPN zone to the LAN zone allowing everything?
This is supposed to be a default rule created by the firewall itself, but you can try creating a new one. Remember to put it at the top.
Yes, I've created "allow all" rules for SSLVPN to LAN and for LAN to SSLVPN. The fact that MobileConnect always works, and that NetExtender works fine after several attempts, suggests the issue isn't related to an access rule.
As previously noted, users have to attempt to connect with NetExtender several times before they're successful. Once they're successful, though, then the connection is solid for the entire day.
Only the unsuccessful attempts lead to the log issues that I noted in the original post.
I recently noticed a similar conversation in the SonicWall subreddit. At least a couple of other people are experiencing the same thing.
You can always open a case with Support.
Yeah, I've had a case open with support for a few weeks. I was just hoping that somebody here would have experienced and solved the same issue already.
I know this is an old item, but I am now experiencing the same problem.
Have you already found a solution?
I opened the ticket in July 2020, but it was never resolved. Since my original post, I eventually had to upgrade the firmware on my remaining two appliances because there was a security concern with the -53n firmware. As soon as they were upgraded to the -79n build, my users started complaining that they had to try several times before they could get a successful connection. The logs showed the same errors that my previously upgraded appliances had been reporting, as documented in the original post.
Between July 2020 and March 2021, I'd get a call from support every few weeks. They would say that engineering wanted them to check a few settings. It became clear that they must have thought that I had a firewall rule which was prohibiting traffic because this is what they would check virtually every time.
I tried very hard to explain that it couldn't be a firewall rule because (1) I didn't change the configuration, and the problem started immediately after the -79n firmware or later was installed on all four appliances, (2) after a few tries, a user could get a stable connection that would pass traffic all day, and (3) all connections using MobileConnect functioned properly.
In March 2021, I decided to cut my losses and I went with another brand. It became clear that no matter how many times engineering would instruct support to check for a firewall rule that could have been blocking traffic, they were never going to find one because it didn't exist.
I appreciated my users' patience with me, and I just decided that it was time to appease them.
I was getting (sort of) the same issue: I was seeing "destination for X.X.X.X is not allowed by access control" in my logs, where X.X.X.X was one of the DNS addresses listed under SSLVPN -> Client Settings -> Default Device Profile -> Client Settings.
So I changed those, and the new DNS addresses started appearing in the log errors, so I knew I was on the right track. I checked all the ACLs; nothing was even remotely blocking the traffic. I added some ACLs to try to force-allow the traffic, even though there was no deny rule in the way. Nothing worked; the messages still appeared in the logs.
Somehow I stumbled onto the local user and group settings. Under Users -> Local Users & Groups there are three sections: Local Users, Local Groups, and Settings. Under Local Users, go into the SSL VPN user(s) configuration and add the subnet in question to the VPN Access tab. Then go to the Local Groups section, open the configuration for the group that the SSL VPN user(s) belong to, and update its VPN Access section as well.
That makes three places in total where access to specific networks/address objects has to be allowed. Once I added all my necessary subnets to those sections, the messages went away completely, and my packet monitor started showing those packets being forwarded instead of dropped.
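To make the "three places" point concrete, here is a toy model of the semantics as I understand them (my assumption, not official SonicWall documentation): a destination subnet only works end to end when it is both pushed as a client route and granted via the VPN Access list of the user or one of their groups.

```python
def effective_access(client_routes, user_vpn_access, group_vpn_access):
    """Toy model: a subnet is usable only if it is pushed as a client route
    AND granted via the user's (or their group's) VPN Access list."""
    granted = set(user_vpn_access) | set(group_vpn_access)
    return set(client_routes) & granted

# Hypothetical example: three subnets are routed to the client,
# but only two are granted between the user and group lists.
routes = {"10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16"}
user = {"10.2.0.0/16"}
group = {"10.1.0.0/16"}

print(sorted(effective_access(routes, user, group)))
# → ['10.1.0.0/16', '10.2.0.0/16']
# 10.3.0.0/16 is routed but not granted, so traffic to it gets dropped —
# which matches the "not allowed by access control" symptom.
```

If this model holds, a subnet missing from any one of the lists silently breaks traffic even though no explicit deny rule exists anywhere, which is consistent with what the thread describes.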
I know this is old, but hopefully this is helpful to someone. I spent 2 days on this!