I'm looking for some insight into how best to configure the memory monitoring rules for SQL Servers in Foglight.

Adhering to the practice of using the default memory configuration for a SQL Server hosting an OLTP database, with max server memory left above min server memory at its default of 2,147,483,647 MB so that SQL Server dynamically allocates memory from the OS as required, results in the Foglight WindowsAgent almost constantly alerting that there is a memory shortage from the OS perspective. That makes sense, since SQL Server consumes everything it can.
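For context, a quick way to confirm the instance really is running with those defaults is a standard catalog view query (nothing Foglight-specific, just a sketch of what I check):

```sql
-- Check the current min/max server memory settings.
-- value_in_use = 2147483647 means max server memory is still at its
-- default, i.e. SQL Server may take as much memory as the OS will give it.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)');
```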

My concern is that Foglight paints a picture of an impending server issue due to a perceived memory shortage. I repeatedly see "Memory Shortage Windows" and "Memory Utilization" alerts in Foglight. Using SQL PI, if I look at the memory status for a particular server where this issue is present, it shows the following:

Machine RAM (MB): 16,383.55 total; 418.00 free
SQL Server (MB): 16,140.52 total; 16.90 free
Page Life Expectancy: 6d 06:07:28
Memory Used: 99% (Critical alert)
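For anyone who wants to sanity-check the same numbers on their own server, the standard DMVs give roughly the same picture (this is a generic cross-check, not how Foglight itself collects the data; the PLE above works out to roughly 540,000 seconds):

```sql
-- OS-level totals (roughly the "Machine RAM" figures above).
SELECT total_physical_memory_kb / 1024.0     AS total_mb,
       available_physical_memory_kb / 1024.0 AS free_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;

-- SQL Server's own memory footprint.
SELECT physical_memory_in_use_kb / 1024.0 AS sql_in_use_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;

-- Page Life Expectancy in seconds.
SELECT cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```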

So I'm curious what the Foglight team recommends for properly configuring monitoring in this scenario. One approach, to keep Foglight happy, is to configure a static max server memory allocation for this server that always leaves a specific amount of memory for the OS. I currently have a PLE of over six days, so it seems I could afford to take this approach. But is it the right approach from the SQL Server best practice point of view?
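If I do go the static-cap route, the change itself is trivial; something along these lines, where the 12288 MB value is only an example for a 16 GB server and not a recommendation:

```sql
-- Illustrative only: cap max server memory at 12 GB on this 16 GB box,
-- leaving roughly 4 GB for the OS and other processes. Size the value
-- for your own workload; 12288 is just an example.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```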

Another approach is to raise the registry variable thresholds for these memory-related alerts (INF_MemoryUseFatal, INF_MemoryUseCritical, INF_MemoryUseWarning) and scope the higher thresholds to only the affected SQL Servers. Maybe that is the better option?

If anyone in the community has experience or insight to share, it would be greatly appreciated.

Thanks