Puppet with multiple NFS mounts on the same server

I have a few NFS mount points from the same server, each at a different directory. For example:

    x.x.x.x:/stats   /data/stats
    x.x.x.x:/scratch   /data/scratch
    x.x.x.x:/ops   /data/ops    

But when I run Puppet, it adds the following to my fstab (wrong mount assignment):

    x.x.x.x:/scratch   /data/stats       nfs     defaults,nodev,nosharecache     0       0
    x.x.x.x:/scratch   /data/ops         nfs     defaults,nodev,nosharecache     0       0
    x.x.x.x:/scratch   /data/scratch     nfs     defaults,nodev,nosharecache     0       0
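
What I expected instead (my own reconstruction of the intended mapping) is:

    x.x.x.x:/stats     /data/stats       nfs     defaults,nodev,nosharecache     0       0
    x.x.x.x:/ops       /data/ops         nfs     defaults,nodev,nosharecache     0       0
    x.x.x.x:/scratch   /data/scratch     nfs     defaults,nodev,nosharecache     0       0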

Puppet is writing the last device into every mount entry. So I did a bit of research and found the following bug report:

 https://tickets.puppetlabs.com/browse/DOCUMENT-242
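
From what I read, the Linux NFS client shares one superblock across mounts from the same server by default, so the same source device can end up being reported for all of them in /proc/mounts; nosharecache is supposed to prevent that. For reference, the equivalent manual mount would be (my own sketch, not taken from the ticket):

    # Mount one export by hand with the nosharecache option.
    mount -t nfs -o defaults,nodev,nosharecache x.x.x.x:/stats /data/stats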

I then added the nosharecache option, but still no luck. This is my Puppet code:

    class profile::mounts::stats {
      # Hiera lookups
      $location  = hiera('profile::mounts::stats::location')
      $location2 = hiera('profile::mounts::stats::location2')

      tag 'new_mount'

      file { '/data/stats':
        ensure  => directory,
        owner   => 'root',
        group   => 'root',
        mode    => '0755',
        require => File['/data'],
        tag     => 'new_mount',
      }

      mount { '/data/stats':
        ensure  => mounted,
        fstype  => 'nfs',
        device  => $location,
        options => 'defaults,nodev,nosharecache',
        require => File['/data/stats'],
        tag     => 'new_mount',
      }

      file { '/data/ops':
        ensure  => directory,
        owner   => 'root',
        group   => 'mail',
        mode    => '0775',
        require => File['/data'],
        tag     => 'new_mount',
      }

      mount { '/data/ops':
        ensure  => mounted,
        fstype  => 'nfs',
        device  => $location2,
        options => 'defaults,nodev,nosharecache',
        require => File['/data/ops'],
        tag     => 'new_mount',
      }

      file { '/data/scratch':
        ensure  => directory,
        owner   => 'root',
        group   => 'mail',
        mode    => '0775',
        require => File['/data'],
        tag     => 'new_mount',
      }

      mount { '/data/scratch':
        ensure  => mounted,
        fstype  => 'nfs',
        device  => $location2,
        options => 'defaults,nodev,nosharecache',
        require => File['/data/scratch'],
        tag     => 'new_mount',
      }
    }

My Hiera data is as follows:

    profile::mounts::stats::location: x.x.x.x:/stats
    profile::mounts::stats::location2: x.x.x.x:/scratch

Why is it causing this unexpected behavior?

I compiled that code and I see a few issues:

You did not include the File['/data'] resource, but I assume you have that somewhere else?
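
To get the class to compile I stubbed that resource out myself. A minimal sketch, assuming nothing beyond a plain directory (the owner/group/mode are my guesses, not from your post):

    # Hypothetical stand-in for the missing parent directory resource.
    file { '/data':
      ensure => directory,
      owner  => 'root',
      group  => 'root',
      mode   => '0755',
    }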

After compiling I see this in the catalog:

    $ cat myclass.json | jq '.resources | .[] | select(.type == "Mount") | [.title, .parameters]'
    [
      "/data/stats",
      {
        "ensure": "mounted",
        "fstype": "nfs",
        "device": "x.x.x.x:/stats",
        "options": "defaults,nodev,nosharecache",
        "require": "File[/data/stats]",
        "tag": "new_mount"
      }
    ]
    [
      "/data/ops",
      {
        "ensure": "mounted",
        "fstype": "nfs",
        "device": "x.x.x.x:/scratch",
        "options": "defaults,nodev,nosharecache",
        "require": "File[/data/ops]",
        "tag": "new_mount"
      }
    ]
    [
      "/data/scratch",
      {
        "ensure": "mounted",
        "fstype": "nfs",
        "device": "x.x.x.x:/scratch",
        "options": "defaults,nodev,nosharecache",
        "require": "File[/data/scratch]",
        "tag": "new_mount"
      }
    ]
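
For reference, I produced myclass.json before running the jq query above. A sketch, assuming a pre-6 Puppet where the master subcommand still exists (agent01.example.com is a placeholder certname):

    # Compile the node's catalog and save it as JSON for inspection.
    puppet master --compile agent01.example.com > myclass.json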

So you are mounting both /data/ops and /data/scratch on $location2. Is that an oversight? It does not match what you said you were trying to achieve.
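
If it is, one possible fix is a third Hiera key for the /ops export, and pointing the /data/ops mount at it. A sketch; the key name profile::mounts::stats::location3 is my invention, not from your code:

    # Hypothetical Hiera entry:
    #   profile::mounts::stats::location3: x.x.x.x:/ops
    $location3 = hiera('profile::mounts::stats::location3')

    # Replaces the existing /data/ops mount block.
    mount { '/data/ops':
      ensure  => mounted,
      fstype  => 'nfs',
      device  => $location3,   # was $location2, i.e. x.x.x.x:/scratch
      options => 'defaults,nodev,nosharecache',
      require => File['/data/ops'],
      tag     => 'new_mount',
    }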

Otherwise I can't reproduce what you said you are observing.

Is anything other than Puppet editing the fstab file? Did you try this code on a fresh box?
