Thursday, May 28, 2015

How large is my ESXi core dump partition?

Today I was asked to check the core dump partition size on an ESXi 5.1 host because this particular host experienced a PSOD (Purple Screen of Death) with a message that the core dump was not saved completely because it ran out of space.

To be honest, it took me some time to figure out how to find the core dump partition size, so I have documented it here.

All commands and outputs are from my home lab, where I have an ESXi 6 host booted from USB, but the principle should be the same.

To run these commands you have to log in to the ESXi shell, for example over SSH or via the ESXi troubleshooting console.

The first step is to find out which disk partition is used for the core dump.
 [root@esx01:~] esxcli system coredump partition get
    Active: mpx.vmhba32:C0:T0:L0:9
    Configured: mpx.vmhba32:C0:T0:L0:9
Now we know that the core dump is configured on disk mpx.vmhba32:C0:T0:L0, partition 9.

The second step is to list disks and their partitions together with sizes.
 [root@esx01:~] ls -lh /dev/disks/
 total 241892188
 -rw-------  1 root   root    3.7G May 28 11:25 mpx.vmhba32:C0:T0:L0  
 -rw-------  1 root   root    4.0M May 28 11:25 mpx.vmhba32:C0:T0:L0:1  
 -rw-------  1 root   root   250.0M May 28 11:25 mpx.vmhba32:C0:T0:L0:5  
 -rw-------  1 root   root   250.0M May 28 11:25 mpx.vmhba32:C0:T0:L0:6  
 -rw-------  1 root   root   110.0M May 28 11:25 mpx.vmhba32:C0:T0:L0:7  
 -rw-------  1 root   root   286.0M May 28 11:25 mpx.vmhba32:C0:T0:L0:8  
 -rw-------  1 root   root    2.5G May 28 11:25 mpx.vmhba32:C0:T0:L0:9  
You can get the same information with partedUtil.
 [root@esx01:~] partedUtil get /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:9
 326 255 63 5242880
Here you can see the partition has 5,242,880 sectors, where each sector is 512 bytes. That means 5,242,880 * 512 / 1024 / 1024 / 1024 = 2.5 GB.
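The same arithmetic can be checked directly in any POSIX shell; nothing here is ESXi-specific:

```shell
# 5,242,880 sectors x 512 bytes per sector, expressed in MiB (2560 MiB = 2.5 GiB)
SECTORS=5242880
SECTOR_SIZE=512
echo "$((SECTORS * SECTOR_SIZE / 1024 / 1024)) MiB"
# prints: 2560 MiB
```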

Note: It is 2.5 GB because ESXi is installed on a 4 GB USB stick. If you have a regular hard drive, the core dump partition should be 4 GB.

BUT all the above information is not valid if you have changed your Scratch Location (there is a VMware KB describing how to do it). If your Scratch Location has been changed, you can display the current scratch location, which is stored in /etc/vmware/locker.conf:
 [root@esx01:~] cat /etc/vmware/locker.conf  
 /vmfs/volumes/02c3c6c5-53c72a35/scratch/esx01.home.uw.cz 0  
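If you only need the path itself (for scripting), you can strip the trailing number with awk. A minimal sketch, assuming the two-field format shown above; it is demonstrated against a sample copy so it can be tried in any shell, not only on an ESXi host:

```shell
# locker.conf holds "<scratch-path> <flag>"; print only the path.
echo '/vmfs/volumes/02c3c6c5-53c72a35/scratch/esx01.home.uw.cz 0' > /tmp/locker.conf.sample
awk '{print $1}' /tmp/locker.conf.sample
# On the host itself you would run: awk '{print $1}' /etc/vmware/locker.conf
```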
and you can list the subdirectories in your custom scratch location:
 [root@esx01:~] ls -la /vmfs/volumes/02c3c6c5-53c72a35/scratch/esx01.home.uw.cz  
 total 28  
 d---------  7 root   root     4096 May 12 21:45 .  
 d---------  4 root   root     4096 May 3 20:47 ..  
 d---------  2 root   root     4096 May 3 21:17 core  
 d---------  2 root   root     4096 May 3 21:17 downloads  
 d---------  2 root   root     4096 May 28 09:30 log  
 d---------  3 root   root     4096 May 3 21:17 var  
 d---------  2 root   root     4096 May 12 21:45 vsantraces  

Please note that the new scratch location contains the custom core dump subdirectory (core) and also a log subdirectory (log).

Other considerations
I usually change the ESXi core dump partition and log directory location to a shared datastore. This is done with the following ESXi host advanced settings, fully described in this VMware KB:
  • Core dump location: ScratchConfig.ConfiguredScratchLocation
  • Log location: Syslog.global.logDir, and optionally Syslog.global.logDirUnique if you want to redirect all ESXi hosts to the same directory
I also recommend sending logs to a remote syslog server over the network, which is done with the advanced setting:
  • Remote syslog server(s): Syslog.global.logHost
ESXi core dumps can also be transferred over the network to a central core dump server (ESXi Dump Collector). It has to be configured with the following esxcli commands.
 esxcli system coredump network set --interface-name vmk0 --server-ipv4 [Core_Dump_Server_IP] --server-port 6500  
 esxcli system coredump network set --enable true  
 esxcli system coredump network check  

Wednesday, May 06, 2015

DELL Force10 VLT and vSphere Networking

DELL Force10 VLT is a multi-chassis LAG technology. I wrote several blog posts about VLT, so for a VLT introduction look at http://blog.igics.com/2014/05/dell-force10-vlt-virtual-link-trunking.html. All Force10-related posts are listed here. By the way, DELL Force10 S-Series switches have been renamed to DELL S-Series switches with DNOS 9 (DNOS stands for DELL Network Operating System); however, I'll keep using Force10 and FTOS in my series to keep it uniform.

In this blog post I would like to discuss a Force10 VLT specific failure scenario: what happens when the VLTi fails.

A VLT domain is actually a cluster of two VLT nodes (peers). One node is configured as primary and the second as secondary. The VLTi is the peer link between the two VLT nodes. The main role of the VLTi peer link is to synchronize MAC address-to-interface assignments, which are used for optimal traffic flow in VLT port-channels. In other words, when everything is up and running, data traffic over VLT port-channels (virtual LAGs) is optimized and the optimal link is chosen to eliminate traffic across the VLTi. The VLTi carries data traffic only when a VLT link fails on one node while another VLT link is still available on the other node.

Now you can ask what happens in case of a VLTi failure. In this situation the backup link kicks in and acts as a backup communication link for the VLT domain cluster. This situation is called a split-brain scenario, and the exact behavior is nicely described in the VLT Reference Guide:
The backup heartbeat messages are exchanged between the VLT peers through the backup links of the OOB Management network. When the VLTI link (port-channel) fails, the MAC/ARP entries cannot be synchronized between the VLT peers through the failed VLTI link, hence the Secondary VLT Peer shuts the VLT port-channel forcing the traffic from the ToR switches to flow only through the primary VLT peer to avoid traffic black-hole. Similarly the return traffic on layer-3 also reaches the primary VLT node. This is Split-brain scenario and when the VLTI link is restored, the secondary VLT peer waits for the pre-configured time (delay-restore) for the MAC/ARP tables to synchronize before passing the traffic. In case of both VLTi and backup link failure, both the VLT nodes take primary role and continue to pass the traffic, if the system mac is configured on both the VLT peers. However there would not be MAC/ARP synchronization.
With all that being said, let's look at some typical VLT topologies with a VMware ESXi host. The Force10 S4810 is an L3 switch, therefore the VLT domain can provide both switching and routing services. The upstream router is a single router for external connectivity. The ESXi host has two physical NIC interfaces.

First topology

The first topology uses VMware switch-independent connectivity. This is a very common and popular ESXi network connectivity option because of its simplicity for the vSphere administrator.




The problem with this topology arises when the VLTi peer-link fails (red cross in the drawing). We already know that in this scenario the backup link kicks in and the VLT links on the secondary node are intentionally disabled (black cross in the drawing). However, our ESXi host is not connected via VLT, therefore the server-facing port stays up. The VLT domain doesn't know anything about the VMware vSwitch topology, so it must keep the port up, which results in a black-hole scenario (black circle in the drawing) for virtual machines pinned to VMware vSwitch Uplink 2.
I hear you. You ask what the solution to this problem is. I think there are two solutions. The first, out-of-the-box solution is to use VLT down to the ESXi host, which is depicted in the second topology later in this post. The second solution could be to leverage UFD (Uplink Failure Detection) and track some VLT ports together with the server-facing ports. I have not tested this scenario, but I think it should work and there is a big probability I'll have to test it soon.
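To sketch the UFD idea in FTOS terms, an uplink-state-group would track the VLT member ports as upstream interfaces and shut the server-facing ports when they go down. This is a hypothetical, untested snippet; the interface numbers are placeholders, not from any validated design:

```
! Hypothetical UFD sketch - adjust interface numbers to your topology
uplink-state-group 1
 description Shut server-facing ports when VLT uplinks fail
 upstream TenGigabitEthernet 0/48
 downstream TenGigabitEthernet 0/1
```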

Second topology

The second topology leverages VMware LACP. LACP connectivity is obviously more VLT friendly because the VLT is established down to the server and the downlink to the ESXi host is correctly disabled. Virtual machines are not pinned directly to VMware vSwitch uplinks; they are connected through the LACP virtual interface. That's the reason you will not experience a black-hole scenario for some virtual machines.







Conclusion

Server virtualization is nowadays in every modern datacenter. That's the reason why virtual networking has to be taken into account in any datacenter network design. VMware switch-independent NIC teaming is simple for the vSphere administrator, but it can negatively impact network availability in some scenarios. Unfortunately, the VMware standard virtual switch doesn't support dynamic port-channels (LACP), only static port-channels. A static port-channel should work correctly with VLT, but LACP is recommended because of the LACP keep-alive mechanism. LACP is available only with the VMware distributed virtual switch, which requires the highest VMware license (vSphere Enterprise Plus edition). VMware's distributed virtual switch with an LACP uplink is the best solution for Force10 VLT. In case of budget or technical constraints, you have to design an alternative solution leveraging either a static port-channel (VMware calls it "IP hash load balancing") or FTOS UFD (Uplink Failure Detection) to mitigate the risk of a black-hole scenario.

Update 2015-05-13:
I have just realized that NPAR is actually a technical constraint preventing the use of port-channel technology on the ESXi host virtual switch. NPAR technology allows switch-independent network partitioning of physical NIC ports into multiple logical NICs. However, a port-channel cannot be configured on NPAR-enabled NICs, therefore UFD is probably the only solution to avoid a black-hole scenario when the VLT peer-link fails.

CISCO UCS Product Poster

Here is a nice poster depicting the CISCO Unified Computing System components.

Wednesday, April 08, 2015

Force10 Link Dampening

First of all, let's explain why we should use Link Dampening.

Interface state changes occur when interfaces are administratively brought up or down, or when an interface state changes on its own. Every time an interface changes state, or flaps, routing protocols are notified of the status of the routes that are affected by the change, and these protocols go through the momentous task of re-converging. Flapping therefore puts the whole network at risk of transient loops and black holes. Link dampening minimizes this risk by imposing a penalty for each interface flap and decaying the penalty exponentially. After the penalty exceeds a certain threshold, the interface is put into an Error-Disabled state and, for all practical purposes of routing, the interface is deemed "down." After the interface becomes stable and the penalty decays below a certain threshold, the interface comes up again and the routing protocols re-converge.

Dampening parameters:
Syntax: dampening [half-life [reuse-threshold [suppress-threshold [max-suppress-time]]]]
  • half-life: The period after which the penalty is halved. The range is from 1 to 30 seconds. The default is 5 seconds.
  • reuse-threshold: The penalty value below which the interface state is changed back to "up". The range is from 1 to 20000. The default is 750.
  • suppress-threshold: The penalty value above which the interface state is changed to "error-disabled". The range is from 1 to 20000. The default is 2500.
  • max-suppress-time: The maximum time an interface can remain suppressed. The range is from 1 to 86400 seconds. The default is four times the half-life value (20 seconds with the default half-life).

Dampening algorithm:
With each flap, Dell Networking OS penalizes the interface by assigning a penalty (1024) that decays exponentially depending on the configured half-life. After the accumulated penalty exceeds the suppress-threshold value, the interface moves to the Error-Disabled state. This interface state is deemed "down" by all static and dynamic Layer 2 and Layer 3 protocols. The penalty decays exponentially based on the half-life timer, and after it drops below the reuse threshold, the interface is re-enabled.

Dampening settings timing example:
Let's say we have dampening 10 100 1000 60:
  • half-life = 10 seconds
  • reuse-threshold = 100
  • suppress-threshold = 1000
  • max-suppress-time = 60 seconds
Time after flap | Penalty | Port state | Comment
0s              | 1024    | Down       | Penalty set to 1024; penalty (1024) > suppress-threshold (1000), so the port is error-disabled
10s             | 512     | Down       | Penalty halved to 512; penalty (512) > reuse-threshold (100), so the port stays down
20s             | 256     | Down       | Penalty halved to 256; penalty (256) > reuse-threshold (100), so the port stays down
30s             | 128     | Down       | Penalty halved to 128; penalty (128) > reuse-threshold (100), so the port stays down
40s             | 64      | Up         | Penalty halved to 64; penalty (64) < reuse-threshold (100), so the port state is changed to UP
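The decay in the table above can be reproduced with a few lines of plain shell arithmetic; nothing FTOS-specific here, just the "halve every half-life until below reuse-threshold" rule:

```shell
# Simulate penalty decay: start at 1024, halve every 10s half-life,
# port stays Down while penalty >= reuse-threshold (100).
PENALTY=1024; REUSE=100; HALF_LIFE=10; T=0
while [ "$PENALTY" -ge "$REUSE" ]; do
  echo "t=${T}s penalty=${PENALTY} state=Down"
  PENALTY=$((PENALTY / 2))
  T=$((T + HALF_LIFE))
done
echo "t=${T}s penalty=${PENALTY} state=Up"
# last line printed: t=40s penalty=64 state=Up
```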



Saturday, April 04, 2015

ESXi root password complexity

Warning: This is just for lab experimenting and not for production use. 

When experimenting with ESXi in the lab, you sometimes have to reset ESXi to default settings. After "Reset System Configuration" from the DCUI, your password is removed and you have to set a new one. I prefer to have a simple root password in the lab. However, ESXi requires pretty strong password complexity, and below is the procedure for decreasing it.

1/ Login to ESXi shell console.

2/ Edit /etc/pam.d/passwd (vi /etc/pam.d/passwd)
By default, password complexity is set like this:
password     requisite    /lib/security/$ISA/pam_passwdqc.so retry=3 min=disabled,disabled,disabled,7,7

3/ Change the password requisite line to:
password     requisite    /lib/security/$ISA/pam_passwdqc.so retry=3 min=disabled,disabled,disabled,1,1

4/ Change the root password with the passwd command.
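If you prefer a one-liner over vi, the edit in step 3 is a plain sed substitution. A sketch demonstrated against a sample copy of the line (on the host the file is /etc/pam.d/passwd; test in the lab first, and note that ESXi's BusyBox sed may need output redirected to a temp file instead of in-place editing):

```shell
# Work on a sample copy to illustrate the substitution safely.
echo 'password     requisite    /lib/security/$ISA/pam_passwdqc.so retry=3 min=disabled,disabled,disabled,7,7' > /tmp/passwd.sample
sed 's/min=disabled,disabled,disabled,7,7/min=disabled,disabled,disabled,1,1/' /tmp/passwd.sample
```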

For more information, look at the vSphere documentation.

Tuesday, March 31, 2015

VCDX Application submitted - time for mock defenses

I have just submitted my VCDX application for the June defense in Frimley, UK. I assume all my readers know what VCDX stands for. For those who don't, look at VCDX.vmware.com for further details. I don't want to write about the VCDX defense process, preparation, etc., because there are lots of other blog posts and resources available on the internet.

I think that VCDX is about continuous lifelong learning at home and practicing in the field. However, I believe that learning must be significantly boosted before the defense, because on the VCDX panel sit some of the most skilled vSphere architects on this planet. Therefore your probability of success increases when you are prepared for any question regarding your design.

Preparing together is better. That's the reason I'm looking for other VCDX candidates who have already submitted their VCDX applications and are targeting the July defense. I would be more than happy to organize study sessions or mock defenses over WebEx.

Below are the session times that suit me best. However, if you prefer another time, just write a comment or send a tweet to @david_pasek and I can arrange additional sessions.

All times are in Central European Time (GMT+2). If you want to register send a tweet to @david_pasek or post a comment with date(s) you are planning to attend.


Session time: April 06, Mon, 9pm – 11pm
Location: webex TBD
Topic: TBD
Attendees: David Pasek (O)

Session time: April 13, Mon, 9pm – 11pm [S]
Location: webex link
Topic: Mock defense
Attendees: David Pasek (O), Olivier B (A,P)

Session time: April 20, Mon, 9pm – 11pm [S]
Location: webex link
Topic: Mock defense
Attendees: David Pasek (O,G), Olivier B (A,P), @nickbowienz (A,P), Shady Ali (A), Kiran Reid (A)

Session time: April 27, Mon, 9pm – 11pm [S]
Location: webex link
Topic: Larus's Mock defense
Attendees: David Pasek (O,P), Larus Hjartarson (G), Simon H. (P)

Session time: May 04, Mon, 9pm – 11pm [S]
Location: webex link
Topic: Simon's Mock defense
Attendees: David Pasek (O,P), Larus Hjartarson (P), Simon H. (G)

Session time: May 11, Mon, 9pm – 11pm [S]
Location: webex link
Topic: David's Mock defense
Attendees: David Pasek (O,G), Larus Hjartarson (P), Simon H. (P)

Session time: May 18, Mon, 9pm – 11pm [S]
Location: webex link
Topic: Larus's Mock defense
Attendees: David Pasek (O,P), Larus Hjartarson (G), Simon H. (P)

Session time: May 25, Mon, 9pm – 11pm [S]
Location: webex link
Topic: Simon's Mock defense
Attendees: David Pasek (O), Larus Hjartarson (P), Simon H. (G)

Session time: June 01, Mon, 9pm – 11pm
Location: webex TBD
Topic: TBD
Attendees: TBD


Legend:
S - Session scheduled
(O) - Organizer
(A) - Attendee
(P) - Panelist
(G) - VCDX candidate to be grilled :-)

Sunday, March 15, 2015

DELL Force10 : mVLT – Ethernet Loop Free Topology Design

Last week I received the following question from one of my readers…
I came to your blog post http://blog.igics.com/2014/05/dell-force10-vlt-virtual-link-trunking.html and I am really happy that you shared this information with us. However I was wondering if you have tested a scenario with 4 S4810 with VLT configured on 2 x 2 and connected together (somewhere called mLAG). How do you continue to add VLT couples to the setup? I would be really happy if you could provide any info regarding such setup.
So let's deep dive into a VLT port-channel between two Force10 VLT domains, also known as mVLT. Please note that VLT can be configured not only between two Force10 VLT domains but also between a Force10 VLT domain and other multi-chassis port-channel technologies, for instance CISCO virtual Port Channel (vPC). However, this blog post is focused on a single-vendor mVLT solution on DELL S-Series switches (previously known as Force10 S-Series).

If you are not familiar with DELL Force10 VLT technology, read my previous blog post where VLT is described in detail. It is really important to understand VLT before you try to understand mVLT (Multi-domain VLT). By the way, mVLT is called eVLT (Enhanced VLT) in the Force10 documentation, so it might be a little bit confusing. Anyway, mVLT is nothing else than a regular virtual port-channel (VLT) between two VLT domains. Therefore mVLT is quite a good term, if you ask me.

mVLT Logical Design
The mVLT logical design is pretty straightforward. The requirement is to achieve stretched L2 across two datacenters without any loops. This topology is often called a loop-free topology, and it is depicted in the figure below from the spanning tree (STP) point of view.


However, we would like to have hardware and link redundancy; therefore multi-chassis port-channel technology (Force10 VLT in our particular case) is used to keep a simple loop-free topology from the spanning tree point of view, but with switch unit and physical link redundancy. The Force10 mVLT solution is logically depicted in the figure below.


Please note that each VLT domain acts in spanning tree as a single logical switch.

DELL highly recommends using four links between VLT domains because of higher redundancy and optimal data flow. However, sometimes you are constrained by the number of links between sites. A two-link DCI is also a supported design, but it is not recommended because there is obviously lower link redundancy and therefore a higher probability of communication over the VLTi, which adds a hop and therefore latency. The two-link mVLT DCI, also known as a square design, is depicted in the figure below.


Even though the topology is loop free and from the logical view we have just one switch in each datacenter, the spanning tree protocol should still be enabled and configured, just in case of human error or a VLT domain failure or split. The Rapid Spanning Tree Protocol (RSTP) is good enough and is therefore used in the physical configurations below.

mVLT Physical Design and Configuration
The physical design below shows the connectivity of four (2x two) Force10 S4810 switches leveraging four links for the DCI port-channel (mVLT).


The physical design for a two-link DCI is depicted in the following schema.


And the switch configuration snippets for the four-link mVLT are listed below for completeness. The two-link DCI is just a variation of a similar configuration, so you can simply reuse and slightly change the four-link configuration.

DCA-SWCORE-A – acts as primary Root Bridge in RSTP in case of loop
!
hostname DCA-SWCORE-A
!
protocol spanning-tree rstp
 no disable
 hello-time 1
 max-age 6
 forward-delay 4
 bridge-priority 4096
!
vlt domain 1
 peer-link port-channel 128
 back-up destination 172.16.201.2
 primary-priority 1
 system-mac mac-address 02:00:00:00:00:01
 unit-id 0
 peer-routing
!
 proxy-gateway lldp
  peer-domain-link port-channel 127
!
interface TenGigabitEthernet 0/46
 no ip address
 mtu 12000
 port-channel-protocol LACP
  port-channel 127 mode active
 dampening 10 100 1000 60
 no shutdown
!
interface TenGigabitEthernet 0/47
 no ip address
 mtu 12000
 port-channel-protocol LACP
  port-channel 127 mode active
 dampening 10 100 1000 60
 no shutdown
!
interface fortyGigE 0/56
 no ip address
 mtu 12000
 no shutdown
!
interface fortyGigE 0/60
 no ip address
 mtu 12000
 no shutdown
!
interface ManagementEthernet 0/0
 ip address 172.16.201.1/24
 no shutdown
!
interface Port-channel 127
 description "mVLT - interconnect link"
 no ip address
 mtu 12000
 switchport
 vlt-peer-lag port-channel 127
 no shutdown
!
interface Port-channel 128
 description "VLTi - interconnect link"
 no ip address
 mtu 12000
 channel-member fortyGigE 0/56,60
 no shutdown
!

DCA-SWCORE-B  – acts as secondary Root Bridge in RSTP in case of loop
!
hostname DCA-SWCORE-B
!
protocol spanning-tree rstp
 no disable
 hello-time 1
 max-age 6
 forward-delay 4
 bridge-priority 8192
!
vlt domain 1
 peer-link port-channel 128
 back-up destination 172.16.201.1
 primary-priority 8192
 system-mac mac-address 02:00:00:00:00:01
 unit-id 1
 peer-routing
!
 proxy-gateway lldp
  peer-domain-link port-channel 127
!
interface TenGigabitEthernet 0/46
 no ip address
 mtu 12000
 port-channel-protocol LACP
  port-channel 127 mode active
 dampening 10 100 1000 60
 no shutdown
!
interface TenGigabitEthernet 0/47
 no ip address
 mtu 12000
 port-channel-protocol LACP
  port-channel 127 mode active
 dampening 10 100 1000 60
 no shutdown
!
interface fortyGigE 0/56
 no ip address
 mtu 12000
 no shutdown
!
interface fortyGigE 0/60
 no ip address
 mtu 12000
 no shutdown
!
interface ManagementEthernet 0/0
 ip address 172.16.201.2/24
 no shutdown
!
interface Port-channel 127
 description "mVLT - interconnect link"
 no ip address
 mtu 12000
 switchport
 vlt-peer-lag port-channel 127
 no shutdown
!
interface Port-channel 128
 description "VLTi - interconnect link"
 no ip address
 mtu 12000
 channel-member fortyGigE 0/56,60
 no shutdown
!
DCB-SWCORE-A – acts as tertiary Root Bridge in RSTP in case of loop
!
hostname DCB-SWCORE-A
!
protocol spanning-tree rstp
 no disable
 hello-time 1
 max-age 6
 forward-delay 4
 bridge-priority 12288
!
vlt domain 2
 peer-link port-channel 128
 back-up destination 172.16.202.2
 primary-priority 1
 system-mac mac-address 02:00:00:00:00:02
 unit-id 0
 peer-routing
!
 proxy-gateway lldp
  peer-domain-link port-channel 127
!
interface TenGigabitEthernet 0/46
 no ip address
 mtu 12000
 port-channel-protocol LACP
  port-channel 127 mode active
 dampening 10 100 1000 60
 no shutdown
!
interface TenGigabitEthernet 0/47
 no ip address
 mtu 12000
 port-channel-protocol LACP
  port-channel 127 mode active
 dampening 10 100 1000 60
 no shutdown
!
interface fortyGigE 0/56
 no ip address
 mtu 12000
 no shutdown
!
interface fortyGigE 0/60
 no ip address
 mtu 12000
 no shutdown
!
interface ManagementEthernet 0/0
 ip address 172.16.202.1/24
 no shutdown
!
interface Port-channel 127
 description "mVLT - interconnect link"
 no ip address
 mtu 12000
 switchport
 vlt-peer-lag port-channel 127
 no shutdown
!
interface Port-channel 128
 description "VLTi - interconnect link"
 no ip address
 mtu 12000
 channel-member fortyGigE 0/56,60
 no shutdown
!

DCB-SWCORE-B – acts as quaternary Root Bridge in RSTP in case of loop
!
hostname DCB-SWCORE-B
!
protocol spanning-tree rstp
 no disable
 hello-time 1
 max-age 6
 forward-delay 4
 bridge-priority 16384
!
vlt domain 2
 peer-link port-channel 128
 back-up destination 172.16.202.1
 primary-priority 8192
 system-mac mac-address 02:00:00:00:00:02
 unit-id 1
 peer-routing
!
 proxy-gateway lldp
  peer-domain-link port-channel 127
!
interface TenGigabitEthernet 0/46
 no ip address
 mtu 12000
 port-channel-protocol LACP
  port-channel 127 mode active
 dampening 10 100 1000 60
 no shutdown
!
interface TenGigabitEthernet 0/47
 no ip address
 mtu 12000
 port-channel-protocol LACP
  port-channel 127 mode active
 dampening 10 100 1000 60
 no shutdown
!
interface fortyGigE 0/56
 no ip address
 mtu 12000
 no shutdown
!
interface fortyGigE 0/60
 no ip address
 mtu 12000
 no shutdown
!
interface ManagementEthernet 0/0
 ip address 172.16.202.2/24
 no shutdown
!
interface Port-channel 127
 description "mVLT - interconnect link"
 no ip address
 mtu 12000
 switchport
 vlt-peer-lag port-channel 127
 no shutdown
!
interface Port-channel 128
 description "VLTi - interconnect link"
 no ip address
 mtu 12000
 channel-member fortyGigE 0/56,60
 no shutdown
!
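After applying the configurations, it is worth verifying the VLT state on each peer. These FTOS show commands are the usual starting point (outputs omitted here; run them on your own switches):

```
DCA-SWCORE-A# show vlt brief       ! VLT domain status, role, VLTi and backup link state
DCA-SWCORE-A# show vlt detail      ! per-VLT-LAG status, including mVLT port-channel 127
DCA-SWCORE-A# show vlt mismatch    ! configuration inconsistencies between the two peers
```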

Conclusion

Force10 mVLT is a great technology for a loop-free L2 network topology. It can be leveraged for local loop-free topologies inside a single datacenter or as an L2 extension between datacenters. However, our networks are usually built to support IP traffic, therefore L3 considerations have to be addressed as well. Just think about default IP gateway behavior and a potential DCI traffic trombone. That's where other VLT features, peer-routing and proxy-gateway, come into play and mitigate the DCI trombone issue. You can see these technologies configured in the VLT configurations above. But that's another topic for another blog post.

To be absolutely honest, I personally don't recommend L2 interconnects between datacenters without a good justification. I strongly recommend L3 datacenter interconnects, and when stretched L2 is needed, some network overlay technology can be leveraged. L3 guarantees independent availability zones and splits the L2 failure domain. On the other hand, such a network overlay needs other bits and pieces, which in some cases increase complexity and cost. Therefore mVLT can be seriously considered for cost-effective datacenter L2 extensions. That's a typical "it depends" scenario where these two design options have to be compared and the final decision clearly justified.

If you want to know more about these technologies or use cases, just ask and we can go deeper or broader. And as always, any feedback and/or comments are highly appreciated.