
Technology

My First Computer

I guess one can say my fascination with technology began with my toys as a child. My parents were frustrated giving me toys because I would take them apart (or rather tear them up) to see how they ticked. As a teenager, I developed an interest in electronics, constructing a simple tone-generating circuit that I made into a monophonic musical keyboard. This was in the era before the small computer.

I enjoy the benefits that technology brings to our lives. Even more, I enjoy understanding the design behind it, how it is used, and where its limits lie. Here are some of the areas of technology that I have worked with, both at home and in the enterprise space. I hope you find a bit of useful knowledge from my experience in these articles.

1 - Open Systems

Per Wikipedia, “Open Systems are computer systems that provide some combination of interoperability, portability, and open software standards. (It can also refer to specific installations that are configured to allow unrestricted access by people and/or other computers; this article does not discuss that meaning).” The articles on these pages relate to Open Systems and data storage, particularly encompassing UNIX and Linux.

Introduction

An Old Computer

My hardware experience is predominantly on SPARC and Intel platforms, though I've had light experience with HP's PA-RISC and the POWER chipset. My greatest accomplishment here was a configuration framework built around a shared filesystem that presented the same look and feel across platforms while managing executable binaries per platform. The user profile played into this framework as well: platform oddities were compensated for in the enterprise-wide shared profile, while still allowing the end user to customize their environment.

My practical OS experience is on Solaris (1.4.x to 2.10) and Linux/Red Hat: Enterprise Linux 3 through 6 professionally, and Fedora Core through the current release (at the moment, Fedora 33) at home. On the Linux front, I've used other distros at various times, such as Ubuntu, SuSE, and Linux Mint (Cinnamon).

I have recently worked with the Raspberry Pi as a mini server to see how viable that platform is, at its comparatively cheap price, for fulfilling low-performance system needs.

1.1 - Architecture

When I was in junior high, all the students were given some sort of IQ test. The counselor told me that my results indicated I was a three-dimensional thinker. That certainly flattered me. Working with any type of architecture requires three-dimensional thinking, and architectural work is what I enjoy most. Designing various interrelated components, both hardware and software, to work together to serve a client community is akin to a conductor directing an orchestra. This section contains articles on various subjects related to computing architecture.

Architecture

1.1.1 - Primal Philosophy

Foundational thoughts on creating a computing architecture in a medium to large enterprise environment.

Introduction

NAS Storage Rack

Back in the glory days of Sun Microsystems, they were visionary at a time when Ethernet and TCP/IP were emerging as the universal network topology and protocol standards. Sun adopted the marketing slogan "the network is the computer", and wisely so. That is the way I have viewed computing from the time I started architecting networks of computers. It isn't about a standalone machine performing only a specific localized task; rather, it is a cooperative service that ultimately satisfies a human need as it relates to a service and its related data.

Through the years, I have been successful in tailoring an architecture that required few administrators to efficiently administer hundreds of computers running on multiple hardware and OS platforms, serving both high-end technical and business end-user communities, desktop and server, across multiple platforms. I have found three areas that require a standard for administration: (1) OS configuration; (2) separating data off onto devices built for the purpose of managing data (i.e. NAS), along with a taxonomy that supports its usage; and (3) network topology.

OS and Software Management

OS Configuration

Ninety-five percent of the OS installation should be normalized into a standard set of configuration files that can be incorporated into a provisioning system such as Red Hat Satellite for Linux. Separating the data off onto dedicated data appliances means backups are performed on fewer machines, and the server kernel no longer has to be tuned to satisfy both the application service running on the server and backup handling. Since application software doesn't rely on a local registry as it does on MS Windows, the application software itself can be delivered to multiple hosts from a network share, making any given host more fault tolerant.

The argument for "installation per host" is that if there is an issue with the installation on the network share, all hosts suffer. This is a bit of a fallacy. While it is true that an issue breaks things everywhere, the converse also holds: fix an issue in one place and you fix it everywhere. The ability to extend the enterprise-wide installation with minimal effort, maximizing your ability to administer it, outweighs the negative of breaking it everywhere. It does take discipline to methodically maintain a centralized software installation.

Data Management

Data should be stored on NAS (network attached storage) appliances, as they are suited toward optimal data delivery across a network and give a central source for managing it. These days, most data is delivered across a network anyway. NAS appliances (such as NetApp) are commonly used to deliver a "data share" using SMB or NFS, or block storage over FCoE.

In the 1990s, the argument against using an Ethernet network for delivering data came down to bandwidth and fear of what would happen if the network went down. Even back then, if you lost the network you lost nearly everything anyway, since the environment was held together by backend services for identity management and DNS. In the 21st century, I always chose to install two separate network cards (at least two ports each) in each server and configured one port from each card into a trunked pair. One pair would service a data network and the other frontend/user access. This has worked well over the years. Virtualizing/trunking multiple network cards provides a fault-tolerant interface, whether for user or data access, though I have never seen a network card go bad.
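As a rough illustration of the trunked-pair idea (the syntax varies by distribution, and the interface names here are hypothetical), a Red Hat style bonding configuration might look something like this:

   # /etc/sysconfig/network-scripts/ifcfg-bond0  (data network trunk)
   DEVICE=bond0
   TYPE=Bond
   BONDING_OPTS="mode=802.3ad miimon=100"
   IPADDR=10.10.20.15
   PREFIX=24
   ONBOOT=yes
   BOOTPROTO=none

   # /etc/sysconfig/network-scripts/ifcfg-em1  (one port from the first card)
   DEVICE=em1
   MASTER=bond0
   SLAVE=yes
   ONBOOT=yes
   BOOTPROTO=none

   # A second port, taken from the other physical card, would be configured the
   # same way (e.g. ifcfg-p1p1), and a second bond (bond1) would be built for
   # the frontend/user network.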

There is a handful of application software that requires SAN storage, though I would avoid SAN unless absolutely required. You are limited by the filesystem laid on the SAN volume, and you probably have to offload management of the data from the appliance serving the volume. NetApp has a good article on SAN vs. NAS.

Business Continuance/Disaster Recovery and Virtualization

Disaster Recovery

Business continuance and disaster recovery also play into this equation. Network virtualization is a term that includes network switches, network adapters, load balancers, virtual LANs, virtual machines, and remote access solutions. Virtualization is key to providing fault tolerance inside a given data center as well as to providing effective disaster recovery. Virtualization across data centers simplifies recovery when a single data center fails. All this requires planning, replication of data, and procedures (automated or not) to swing a service across data centers. Cloud services provide fault-tolerant service delivery as a base offering.

The use of virtual machines is commonplace these days. I've been amused in the past at the administrative practice of Windows administrators who would deploy really small servers that each provided only a single service. When they discovered virtualization, they adhered to the same paradigm, still providing a single service per virtual machine. Working with "the big iron", services with roughly the same tuning requirements would be served off a single server instance, with utilization and performance monitored. With good configuration management, extending capacity was fairly simple.

Work has been done to virtualize the network topology so that you can deploy hosts on the same network worldwide. For me, this is nirvana for supporting a disaster recovery plan, since a service or virtual host can be moved to another host, no matter which physical network the hypervisor is attached to, without having to reconfigure its network configuration or its host naming service entry.

Virtual networks (e.g. Cisco Easy Virtual Network, a Layer 3 virtualization) provide the abstraction layer where network segmentation can go wide, meaning it can span multiple physical networks, providing larger network segments across data centers. With such a "super network", disaster recovery becomes much simpler, since IP addresses do not have to be reconfigured and reported to related services such as DNS.

Cloud Computing

In my last job as a systems architect, I had a vision for creating a private cloud, with the goal of moving most hypervisors and virtual machines into it. Whether administering a private or public cloud, one needs a toolset for managing a "cloud". The term "cloud" was a favorite buzzword 10 years ago that was never well defined. For IT management it usually meant something like "I have a problem that would be easier to shove into the cloud and thus solve the problem" (much like the outsourcing initiatives of the 1990s). An existing problem doesn't go away; if anything, managing the network just becomes more complicated.

There have been various proprietary software solutions that allow the administrator to address part of what is involved in managing a cloud, whether standing up a virtual host, carving out data space, or configuring the network. OpenStack looks to be hardware and OS agnostic for managing private and public cloud environments. I have no hands-on experience with it, but it appears to be a solution for which hardware manufacturers and OS developers have built plugins, along with integrations with the major public cloud providers.

Having experience working with an IaaS/SaaS solution, I find that utilizing a public cloud is only effective with small data. Before initiating a public cloud contract, work out an exit plan. If you have a large amount of data to move, you likely will not be able to push it across the wire. There needs to be a plan in place, possibly a contractual term, for being able to physically retrieve the data. Most companies are anxious to enter into a cloud arrangement but have not planned for when they wish to exit.

Enterprise-Wide Management

There is the old adage that two things are certain in life - death and taxes. Where humans have made something, whether physical or abstract, one more thing is certain - it is not perfect and will likely fail at some time in the future. Network monitoring is required so the administrators know when a system has failed. Implementation stages should include server up/down monitoring, followed by work on adopting algorithms for detecting when a service is no longer available. From there, performance metrics can be collected, and those metrics and thresholds aggregated into a form that supports capacity planning and measures whether critical success factors are met.

Another thought on capacity management: depending on the criticality of the service offering, the environment should provide for test/dev versus production environments. Some services under continual development (e.g. waterfall) could require separate test and dev environments in order to stage for a production push.

Provisioning tools are needed to perform quick, consistent installations, whether loading an OS or enabling a software service. At a minimum, shell scripts are needed to perform the low-level configuration. At a higher level, software frameworks like OpenStack and Red Hat Satellite are needed to manage a server farm of more than a handful of servers.

Remote Access

Remote access has been around in various forms for the past 20+ years and is becoming a critical function today. VPN (virtual private network) is the term associated with providing secure packet transmission over the extranet. While a secure transport is needed, outside of public cloud services there is also the need for an edge service that provides the corporate user environment "as if" the user were inside the office.

Having worked in a company that had high-end graphical workstations used by technical users requiring graphics virtualization and high data performance, we worked with a couple of solutions that delivered a remote desktop. NoMachine worked well, but we migrated toward Nice Software (now an Amazon Web Services company). At the time we were looking not only for a remote access solution, but also for a replacement for the expensive desktop workstation, while providing larger pipes in the data center to the data farm. Nice was advantageous for the end user in that they could start an interactive process on the graphics server farm as a remote application from their desk, suspend the session while the process ran, and reconnect remotely from home to check on it.

Summary

When correctly architected, you create a network of computers that are consistently deployed and easily recreated should the need arise. More importantly, when managing multiple administrators, a defined architecture that is understood and supported by all allows the admins to work beyond the daily issues caused by inconsistent deployment, promotes positive team dynamics, and minimizes tribal knowledge.

1.1.2 - Network Based Administration

This section provides thoughts on the basics of designing a network-based computing environment that requires the fewest administrators to manage it.

Configuration Management

Configuration design and definition is at the core of good network architecture. I have experimented with which configuration elements are important, which should be shared, and which should be maintained locally on each host. Whether an instance is virtual, physical, or a container, these concepts apply universally.

Traditionally, there was a lot of apprehension about sharing applications and configuration over a network, much less sharing application-accessible data. I guess this comes from either people who cannot think three-dimensionally or those whose background is solely administering a Windows network, whose design has morphed out of a limited stand-alone host architecture. Realistically, today, if there were no network we would not be able to do much anyway. Developing a sustainable architecture surrounding a UNIX/Linux network is efficient and manageable. Managing security is a separate topic for discussion.

Identity Management and Unified Information Reference Services

The first step in managing a network of open system computers is to establish a federated name service with the purpose of managing user accounts and groups as well as providing a common reference repository for other information. I have leveraged NIS, NIS+, and LDAP as name services through the years. I favor LDAP, since the directory server provides a better system for redundancy and service delivery, particularly on a global network. MS Windows Active Directory can be made to work with UNIX/Linux hosts by enabling SSL, making some security rule changes, and adding the schema supporting open systems. The downside to Active Directory compared to a Netscape-based directory service is managing the schema: on Active Directory, once the schema has been extended, you cannot rescind the extension unless you rebuild the entire installation from scratch. To date, I have yet to find another standardized directory service that will accommodate the deviations that Active Directory provides an MS network.

In a shop where there are multiple flavors of open systems, I have leveraged the automounter to serve binaries that are shared per OS platform/version. Leveraging NAS storage such as NetApp, replication can be performed across administrative centers so that the same data can be used universally and maintained from one host. For the five hosts I maintain at home, I have found TrueNAS Core (formerly FreeNAS) to be a good open source solution for delivering shared data to my Linux and OS X hosts.

Common Enterprise-Wide File System Taxonomy

The most cumbersome activity in setting up a holistic network is deciding which utilities and software are to be shared across the network from a single source. Depending on the flavor, the path to a binary will differ, and the version won't be consistent between OS versions or platforms. Having a common share for scripting languages such as Perl or Python helps provide a single path to reference in scripts, including plugin inclusion. It requires some knowledge of how to compile and install open source software. More architectural discussion on how to manage the same look and feel across a varied network is included in the article User Profile and Environment.

Along with managing application software across a network, logically the user home directory has to be shared from a NAS. Since the user profile is stored in the home directory, it has to be standardized generically to function on all platforms and possibly versions. Decisions are needed on ordering the PATH and on whether structure is needed in the profile to allow user customizations or local versus global network environments. At a minimum, the stock user profile must be unified so that it can be managed consistently across the whole user community, possibly with the exception of application administration accounts that are specific to the installation of a single application.

Document, Document, Document!

Lastly, it is important to document the architecture and set standards for maintaining a holistic network, as well as to provide a guide for all administrators that ensures consistency in practice.

The links below provide more detail on what I have proven in architecting and deploying a consistent network of open systems.

1.1.2.1 - Federated Name Services - LDAP

Federated name services have evolved through the years. LDAP is the current protocol-driven service that has replaced legacy services such as NIS and NIS+. There are many guides on what LDAP is and how to implement LDAP directory services. This article discusses how to leverage LDAP for access control in a network of open system hosts with multiple user and admin groups in the enterprise.

Introduction

What is a Federated Name Service? In a nutshell, it is a service that is organized by types of reference information, much like a library. There are books in the library of all different types and categories; you select the book off the shelf that best suits your needs and read the information. The "books" in an LDAP directory are small bits of data stored in a database on the backend and presented in a categorical/hierarchical form. This data is generally written once and read many times. This article is specific to open systems; I will write another article on managing LDAP services with Microsoft's Active Directory, which can also service open systems.

Design Considerations

Areas for design, beyond accounts and user groups, include common maps such as host registration supporting Kerberos, or registering MAC addresses that can be used as a reference point for imaging a host and setting the hostname. Another common use is a central definition of automount maps. Depending on how one prefers to manage the directory tree, organizing separate trees that support administration centers housing shared data made the most sense to me, with all accounts and groups stored in a separate, non-location-based tree.

A challenge with open systems and LDAP is managing who can log in where. For instance, you don't want end users to log into a server when they only need to consume a port-based service delivered externally from that host. On some servers, you may need to see all the users of a community but not allow them to log in. This form of "security" can be managed simply by configuring both the LDAP client and the LDAP directory to match on a defined object's value.

To provide an example, let's suppose our community of users comprises Joe, Bob, Mary, Ellen, and John. Joe is the site administrator and should have universal access to all hosts. Bob and John are members of the marketing team, Mary is an application administrator for the marketing team, and Ellen is a member of the accounting team.

On the LDAP directory, you'll need to use an existing object class/attribute or define a new one that will be used to give identity to the "access and identity rights". If you are leveraging an existing attribute, that attribute has to be defined as a multi-value attribute, since one person may need to be given multiple access identities. For the sake of this discussion, let's say we add a custom class and the attribute "teamIdentity" to handle access and identity matching, which is also added to the user objects (user objects can be configured to include multiple object classes as long as they have a common attribute such as cn).

On the client side, you will create a configuration to bind to the directory and determine which name service maps will be used out of the directory service. As part of that configuration, you can define an LDAP filter that is applied when the client queries the directory service, so that only information passing the filter criteria is returned. So, in addition to specifying which directory server to use and what the basedn and leaf should be for making a concise query into the directory, you append a filter that further matches on one or more attribute/value pairs.

For user configuration, there are two configuration types to define: the user (passwd) and shadow databases. The "user" configuration determines who should be visible as a user on the host. The "shadow" configuration determines who can actually log in directly to the host. When the NSS service operates locally on the host, its local database cache will contain the users that match between the directory server content and the client's configured filter object/attribute/values.

The challenge here is more one of functional design: what values are created, and what purpose do they serve? Another custom class may be wise to put definition, and ultimately control, around what attribute values can be added to the user object. Unless you create definitions and rules in your provisioning process, any value (intended, typo, etc.) can be entered.

To bring together this example, let’s suppose that this is the directory definition and content around our user community:

User/Object teamIdentity/Value
Joe admin
Bob marketing-user
Mary marketing-sme
Ellen accounting-user
John marketing

Let’s say we have these servers configured as such:

host01 - Network Monitoring
   Passwd filter: objectclass=posixAccount
   Shadow filter: objectclass=posixAccount,objectclass=mycustom,teamIdentity=admin

host02 - Marketing Services
   Passwd filter: objectclass=posixAccount,objectclass=mycustom,teamIdentity=admin,teamIdentity=marketing-user,teamIdentity=marketing-sme
   Shadow filter: objectclass=posixAccount,objectclass=mycustom,teamIdentity=admin,teamIdentity=marketing-sme

host03 - Accounting Services
   Passwd filter: objectclass=posixAccount,objectclass=mycustom,teamIdentity=admin,teamIdentity=accounting-user
   Shadow filter: objectclass=posixAccount,objectclass=mycustom,teamIdentity=admin,teamIdentity=accounting-sme

Here is how each user defined in the directory server will be handled on the client host:

host01 - identifiable as a user: everyone; able to log in: Joe only
host02 - identifiable as a user: Joe, Bob, Mary; able to log in: Joe, Mary
host03 - identifiable as a user: Joe, Ellen; able to log in: Joe

Notice that for host03, "teamIdentity=accounting-sme" was defined as part of the shadow filter. Since Ellen exists in the directory service with the value "accounting-user" assigned, she will be visible as a user but not able to log in. Conversely, if there were a user in the directory service configured only with "teamIdentity=accounting-sme", they would not be able to log in either, since you have to be identifiable as a user before you can authenticate. One last observation: John is configured with "teamIdentity=marketing". Since that value is not configured in the client filter, John will be neither identifiable nor able to log in on host02.
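To make the client side concrete, here is a minimal sketch of what host02's configuration might look like using the legacy nss_ldap style /etc/ldap.conf. The server name, base DN, and the mycustom/teamIdentity names are the hypothetical ones from this example, and the comma-separated filters from the table above are written out as explicit LDAP AND/OR filters:

   # /etc/ldap.conf (nss_ldap style) on host02 - Marketing Services
   uri ldap://ldap01.example.com
   base dc=example,dc=com

   # passwd map: who is visible as a user on this host
   nss_base_passwd ou=People,dc=example,dc=com?one?(&(objectClass=posixAccount)(objectClass=mycustom)(|(teamIdentity=admin)(teamIdentity=marketing-user)(teamIdentity=marketing-sme)))

   # shadow map: who may actually log in to this host
   nss_base_shadow ou=People,dc=example,dc=com?one?(&(objectClass=posixAccount)(objectClass=mycustom)(|(teamIdentity=admin)(teamIdentity=marketing-sme)))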

For more information on the LDIF syntax, see the Oracle documentation. For more information on client configuration, you'll have to dig that out of the administration documentation for your particular platform/distro.

1.1.2.2 - User Profile and Environment

This article discusses considerations in designing and configuring a user profile supporting the OS and application environments. There are different aspects to consider in tailoring the user environment to operate holistically in an open systems network. One major architectural difference between open systems and MS Windows is that Windows applications, for the most part, depend on a local registry database into which a packaged application plants its configuration. Historically, in traditional UNIX environments, there is only a text file that contains an application's configuration, whether a set of key/value pairs or a simple shell file assigning values to variables.

Overview

More modern versions of UNIX, including Linux, have implemented packaging systems in order to inventory locally installed OS and software components. These systems only store metadata that gives structure to the inventory of installed software and its dependencies, as opposed to being a repository for configuration data. Overall, there is no "registry" per se as in a Windows environment, where the local registry is required. Execution is solely dependent on a binary being executed through the shell environment. The binary can be stored locally or on a network share and be equally executable. The argument against this execution architecture is about control and security of applications running on a given host, since realistically a user can store an executable in their own home directory and execute it from that personal location. This can be controlled to a certain extent, though not completely, by restricting access to compilers, the filesystem, and external storage devices.

Considerations for architecting the overall operating environment can be categorized into these areas:

  • Application or service running on the local host and stored somewhere
  • OS or variants due to the OS layout and installation
  • Aspects directly related to work teams
  • Aspects related to the personal user computing experience.

Each of these areas needs to be part of an overall design for managing the operating environment and the user's experience working in it.

Application Environment

The application environment is the easiest to manage. The scope is fairly narrow and particular to a single application executing on a single host. Since there is no common standard for setting the process environment and launching an application, the application administrator needs to establish a standard for managing how the environment is set and how the application is launched - i.e. provide a wrapper script around each application, executed from a common script directory. Purchased applications may or may not provide their own wrapper. Having a common execution point makes administration easier, particularly when software is integrated with other software. I've seen some creative techniques where a single wrapper script sources its environment based on an input parameter. These, though logical, generally become complicated, since there are as many variations for handling the launch of an application as there are application developers.
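As an illustration of the wrapper approach (the paths and the application name here are made up), a wrapper kept in a common script directory might look like this:

#!/bin/bash
# /global/bin/mktgrpt -- wrapper for a hypothetical "mktgrpt" application.
# Establish the application environment in one known place...
APP_HOME=/apps/mktgrpt/current
export APP_HOME
PATH=$APP_HOME/bin:$PATH
export PATH
LD_LIBRARY_PATH=$APP_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
# ...then launch the real binary with whatever arguments the user supplied.
exec "$APP_HOME/bin/mktgrpt" "$@"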

All OSs have a central shell profile ingrained into the OS itself, depending on the shell. I have found that it is best to leave these alone. Any variation that is particular to the OS environment due to non-OS installations on the local host needs to be managed separately, and that aspect factored into the overall user or application execution environment. Another kink with managing a network of varying OS variants is providing a single profile that compensates for the differences between OSs. For example, a common command might be located in /usr/bin on one OS variant but exist in /opt/sfw/bin on another. Based on the OS variant, the execution path needs to factor in those aspects that are unique to that variant.
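A minimal sketch of that idea in a shared profile (the directory choices are illustrative, not a statement of where any particular OS actually puts things):

# Sourced from the enterprise-wide profile; adjust the search path per OS variant.
case "$(uname -s)" in
    SunOS)
        PATH=/opt/sfw/bin:/usr/bin:$PATH
        ;;
    Linux)
        PATH=/usr/bin:/usr/local/bin:$PATH
        ;;
    *)
        PATH=/usr/bin:$PATH
        ;;
esac
export PATH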

Work teams may have a common set of environment elements that are particular to their group but should be universal to all members of that team. This is another aspect to factor into overall profile management.

User Profile and Environment

Finally, the individual user has certain preferences, such as aliases they want to define and use, that apply only to themselves. From a user provisioning standpoint, a template is used to create the user-oriented profile. The difficulty is in administering a network of users who all wind up with their own modified version of the template first provisioned into their home directory. This complicates desktop support as profiles become corrupted or stale with the passage of time. I have found it wise to adopt a policy of maintaining a pristine copy of the templated profile in the user's home directory, but to provide a user exit that sources a private profile where they can supplement the execution path or set aliases. A scheduled job can be run to enforce compliance here, but only after the policy is adopted and formalized with the user community.
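The compliance job itself can be very simple; a rough sketch, with the template location and home directory layout invented for illustration:

#!/bin/bash
# Nightly cron job: report user profiles that have drifted from the template.
TEMPLATE=/global/etc/profile.template      # hypothetical pristine copy
for home in /home/*; do
    user=$(basename "$home")
    if [ -f "$home/.bash_profile" ] && ! cmp -s "$TEMPLATE" "$home/.bash_profile"; then
        echo "profile drift detected for $user"
    fi
done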

Architecture and Implementation

The best overall architecture I have wound up with is a layered approach with a set priority, where precedence becomes more granular the further down the execution stack a layer sits. In essence, the lower down the chain, the greater the influence that layer has on the final environment, going from the macro to the micro. Here are some diagrams to illustrate this approach.

Logical Architecture

Profile Structure

Execution Order

Profile Execution

The profile is first established by the OS-defined profile, whose sourced file location is compiled into the shell binary itself. The location of this file varies according to the OS variant and how the shell is configured for compilation. The default user-centric profile is located in the home directory using the same hidden file name across all OS variants. It is the user profile file that is the center for constructing and executing the precedence stack. With each layer, the later profile overrides the prior layer, as indicated in the "Logical" diagram. Generally there is little need for the "Local Host Profile"; it is optional and only needed when a profile is created in a standardized location on the local host (e.g. /usr/local/etc/profile).
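Put together, the standardized user profile ends up being little more than an ordered set of sources, from the macro to the micro. A sketch of the idea (the file locations are illustrative of the layers described above, not prescribed paths):

# ~/.bash_profile -- identical for every user; do not edit.
[ -r /global/etc/profile ]      && . /global/etc/profile      # global network layer
[ -r /local/etc/profile ]       && . /local/etc/profile       # local (site) network layer
[ -r /usr/local/etc/profile ]   && . /usr/local/etc/profile   # optional local host layer
[ -r "$HOME/.profile.private" ] && . "$HOME/.profile.private" # user exit for personal customization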

See the next article, "Homogenized Utility Sets", for more information on the "Global" and "Local" network file locations and their purpose. It will give perspective on these shared filesystems.

1.1.2.3 - Homogenized Utility Sets

An article about utilities that can be shared between all open system variants, the difficulties to watch out for, and elements to consider in the design. Ultimately, a shared filesystem layout is needed that presents a single look and feel across multiple platforms but leverages a name service and "tokens" embedded in the automount maps to mount platform-specific binaries according to the local platform. This article is complementary to the previous article, "User Profile and Environment". Topics include: Which Shell?, Utilities, and Managing Open Source.

Which Shell?

In short, CSH (aka C Shell) and its variants aren't worth messing with unless absolutely coerced. I have found inexplicable yet repeatable bugs with CSH. There is quite a choice of Bourne shell variants; I look for the lowest common equivalent between the OS variants.

KSH (aka Korn Shell) is a likely candidate since it has extended functionality beyond the Bourne Shell, but it is difficult to standardize on since there are several versions across platforms, and those extended features make it difficult to code one shell script to be used across all platforms.

I have found that Bash is the most widely supported at the same major version and can be used compatibly out of the box across the network. The last thing I would care to do is re-invent the wheel on a basic foundational component of the OS. Bash is suitable as the default user shell and has a rich enough function set for shell scripting.

Utilities

Working with more than one OS variant presents issues for providing consistent utilities such as Perl, Python, sudo, etc., since these essential tools ship at various, often obsolete, versions out of the box. Managing a consistent set of plugin modules (e.g. for Perl and Python) is also difficult, especially when they are loaded on each individual host in the network. I have found it prudent to download the source for these utilities, along with desirable modules that provide extended functionality, and compile them into a shared filesystem per platform type and version. The rule of thumb here: if all your OS variants sufficiently support an out-of-box version, use the default; if not, compile it and manage it to provide consistency in your holistic network.
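When the out-of-box version is not sufficient, the build itself is unremarkable; the only real difference is installing into the shared per-platform tree instead of /usr/local. A sketch, assuming the /global layout described in the next section, write access to the share from the build host, and a placeholder tarball name:

# Build a utility (a hypothetical sudo release) into the shared tree.
# The automounter resolves /global/bin, /global/lib, etc. to the volume
# for this host's OS variant, so the same prefix works on every platform.
tar xzf sudo-1.9.x.tar.gz
cd sudo-1.9.x
./configure --prefix=/global
make
make install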

Managing Open Source Code

Granted, binary compatibility doesn't cross OS platforms and sometimes doesn't cross OS versions; even so, I have found it easier to compile and manage my homogeneous utility set per OS variant and share it transparently across the network, leveraging the automounter service. First, let's look at a structure that will support a network share for your homogeneous utility set.

There are binary, configuration, and log data on the filesystem to be shared. Below is a diagram of a logical filesystem layout supporting your homogeneous utility set.

Shared Filesystem

I create the automount map supporting this directory structure with embedded tokens on the "Shared on like OS variant" subdirectories that give identity to the OS variant. The size is fairly small, so I simplify by storing all these mounts on the same volume. By doing this, you can replicate between sites, which yields a consistent deployment as well as providing for your disaster recovery plan. I also provide for a pre-production mount. The "Shared on all OS variants" data exists on a shared filesystem that is replicated for disaster recovery, but not used at other sites. Below is a sample for structuring the filesystem share.

Shared on All Hosts

Shared Filesystem Layout for All Hosts

Shared on All Like OS Variants

Shared Filesystem Layout on All Hosts of Same Type and Version

Here is a sample indirect automount map defining the key/value pairs supporting the mount point /global, stored in the "auto.global" map.

Key Value
etc nas001:/vol1/global_common/$ENVN/etc
log nas001:/vol1/global_common/$ENVN/log
bin nas001:/vol2/$OSV/$ENVN/bin
sbin nas001:/vol2/$OSV/$ENVN/sbin
lib nas001:/vol2/$OSV/$ENVN/lib
lib64 nas001:/vol2/$OSV/$ENVN/lib64

Embedded tokens are resolved through the client automounter configuration. For Linux, this is done either in the /etc/auto.master file or in /etc/sysconfig/autofs (Red Hat). Here is a sample entry for the /etc/auto.master configuration file.

   /global    auto.global    -DOSV=rhel5,-DENVN=prod

Alternatively, the tokens can be set in the /etc/sysconfig/autofs configuration file. Note that this affects all maps, whereas the /etc/auto.master options affect only the single map referenced.
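A sketch of that, assuming the Red Hat style OPTIONS variable whose contents are passed to the automount daemon:

   # /etc/sysconfig/autofs
   OPTIONS="-DOSV=rhel5 -DENVN=prod"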

2 - Software Languages

I developed a passion for software development in college. Transitioning from application developer to OS administrator, I particularly enjoyed using simple programming tools to automate repetitive functions or to reduce what would otherwise be a series of complex operations into something simple. This section contains articles on some of the scripting/interpreted languages I have used.

NAS Storage Rack

2.1 - Perl - The Swiss Army Knife for the Administrator

Discussion and techniques used in working with Perl as an administrator.

Introduction

Eventually, the administrator will need to take a pile of text-oriented data and "dice and mince" it in order to evaluate it or format it into a simple report. Shell scripting is a bit crude, but it handles dicing up single lines of output from some command and culling out information to use in some other form.

Perl is an excellent "Swiss Army Knife" among the script-oriented languages. It shines when you need to decompose unstructured data in order to make sense of it and report on it. This is my first tool to reach for when collapsing a collection of related data within groups of collections (e.g. parsing through LDIF or XML formatted data).

Perl is a mature yet still evolving language that remains well supported. Object orientation is rather crude, having been bolted onto version 5, though it is a foundational design feature in version 6. "Moose" is a package that was developed to simplify the usage of the clunky Perl object orientation, though version 6 (aka "Raku") has been released and will likely deprecate "Moose".

Perl is modular and highly, easily extensible. There is a very rich ecosystem at CPAN.org that actively supports packages extending the base functional capability of Perl. The downside, when comparing Perl with other robust scripting languages, is how it lags behind in the packages incorporated into the base installation that support modern needs.

My Experience

I've worked with Perl version 5 for many years for data-mincing operations. Most significantly, I've interfaced with LDAP to extract directory information, and used Perl to traverse a filesystem tree to produce a listing that summarizes how much data is stored at each descending level, with an overall summary at the top level.

I experimented with the multi-threaded capability but found it didn't yield much of a time saver compared to a single-threaded run over the same data. It was likely a contention issue on the CPU, though the code was thread ready.

Here are some other articles on Perl subjects:

2.1.1 - Evil Multidimensional Arrays

Traditionally, a developer has been conditioned to create temporary databases or temporary working files outside of the program code to leverage "database oriented" operations. In Perl, the technique of leveraging arrays defined and stored inside another array is scary for the novice developer.

Introduction

It's hard to wrap your head around having any type of array as an element inside another array. Using this technique is not impossible, but it requires some practice to master. After a few simple applications of the technique, such as multiple hashes stored in a list array whose elements are unique persons, it becomes a brain-dead technique to incorporate into your code and essential for leveraging the utilities for sorting and traversing arrays. I will use this technique when the amount of data is light and relatively small and the output is generally for utility reporting. Multidimensional arrays do consume memory; however, memory is fairly abundant these days and can handle fairly substantial arrays.

Devil in the Details

As a foundational review, there are three types of variables:

  • Scalar - A place in memory that stores either a literal or a reference to another place in memory (i.e. another variable).
  • Array - Also known as a "list array". An indexed variable that holds one or more scalars. Indexes are numeric and start with element "0".
  • Hash - A key/value array whose scalar values are referenced by an associated key. This array type is unstructured, though the utility "sort" operation can reference the key values in alphanumeric order (the sorted keys are themselves returned as a list array for parsing).

At the outset, the basic approach I use in developing Perl code is to choose one data source that will seed a key, loading it into a top-level array. I then parse through other data sources and build off of the initial array structure, appending more arrays as appropriate. The type of array I construct depends on how I need to parse it. This technique helps me break the "data model" down into consumable pieces and allows me to focus at a more detailed level without losing perspective of the whole "virtual data" landscape.

I am big on sufficient inline documentation in the code without regurgitating the code. This is especially important with multidimensional arrays: once you incorporate more than two array levels, I find it is important to insert comments documenting the array structures. This has saved me time in the long term when I have to come back and maintain the code, not to mention sparing whoever else maintains your code the horror of describing you with a continuous stream of four-letter words.

Practical examples where I have incorporated arrays within arrays include a simple case of storing key/values out of an LDIF, where each distinguished name (dn) is stored as an element in a top-level list array. Where there are non-unique object keys in the LDIF (e.g. group members in a posixGroup object class), those hash values become a list array stored as the value of a hash element.

The most complicated example was when I needed to audit the "sudo" rights a user has. I accomplished this by parsing the sudoers file and associating the user, host, group, and command alias sets together referentially, with utility subfunctions that would dump detail out of the related array structure for an input reference. This involved loading individual array sets according to the alias type. Reporting then became modal: associating the rulesets together logically to report by user (what hosts and what commands they can run) or by host (what users are authorized to run sudo and for what commands). There were some limitations and assumptions here (e.g. how sudo handles group-based rules) that the reporting could not accommodate, but this provided an 80% solution where there had been no solution.

Here are a few "how-to's" that give detailed instruction and examples for multidimensional arrays:

2.2 - Python

Python has become popular for rapid application development due to its readability and the ease of maintaining a codebase. It is modular and object oriented. Like Perl, you can extend the core functionality with a host of add-on modules to make calls to a database, use a graphics interface library, or interface with back-end web applications. Python, perhaps surprisingly, can be used for developing anything from a small utility to a large application such as an AI system. Another advantage is the ease of deployment across all platforms, including MS Windows.

Overview

Having programming experience that spans 20+ years, I watched the popularity of Python rise. Already being proficient in other versatile script-oriented languages, including some archaic UI library integrations, why should I bother placing Python in my arsenal? I could see that those with an experience base in the C-oriented languages could rapidly learn a C-like language such as Python without requiring compilation before execution. I dug in to get a feel for the language and discern how it differs from the other languages I had already learned.

[more articles coming in this subject area.]

2.3 - Shell Scripting

Here are a few articles on shell scripting. Shell scripting is very basic and is the basis for executing one command or a sequence of commands on an operating system.

Overview

The "shell" is the easiest of "languages" (if one could call it that) to learn. As a staple of administering an OS, it is a requirement for the administrator to be able to read and write shell scripts, as it is the utility most used for OS installation and configuration as well as for application installation. There are several shells available on any operating system. The shell is the basic command that is executed when a user logs into a system, and it is the mechanism that provides the "environment" in which other commands or applications are executed during a user session.

Here are some articles that discuss the two basic shell types:

2.3.1 - Bourne Shell Scripting

There are variants of the Bourne Shell. The Bourne Shell (/bin/sh) was developed by Stephen Bourne at Bell Labs and released in 1979 with Version 7 of UNIX. It is used as an interactive command interpreter, meaning it is the principal program running when a user logs in for an interactive session, but it was also intended as a scripting language that executes other programs within a framework of logic, like a canned program.

Introduction

There are several Bourne Shell variants available on open systems for rudimentary shell scripting. The traditional Bourne Shell ("sh") has been fading into the sunset, though it is lightweight for use by the OS. I have found that for the user environment, Bash (Bourne Again Shell) is the logical choice: it is packaged on all open system platforms and consistently named (/usr/bin/bash). From a scripting standpoint, though I prefer Korn, Bash is universal enough and has a similar notion of the advanced Korn features, though not as robust (e.g. the smart variable substitution options).

The Korn Shell was originally developed by David Korn. Until about 2000, the source code was proprietary, licensed by AT&T; it was then released as open source. There are two significant versions - KSH 1988 and KSH 1993. On Linux, the open source pdksh roughly tracks the 1988 version. On proprietary UNIX systems, both the traditional Korn 1988 and 1993 are supplied, though the executable is not consistently named across OS variants.

What is the Purpose of the Shell?

The shell concept was originally developed as a way for an end user to start the execution of a program on top of the operating system. It is also a simple way to provide a basic code framework for a stacked execution of commands that produces a desired end result. If you only need simple logic branching and don't require computational math, a shell script is suitable. A common mistake scripters make is using the shell to parse multiple lines out of an input source that have to be considered as one record (e.g. parsing through an LDIF); another tool such as Perl or Python should be used for that kind of advanced scripting.

Arrays are basic. There is only one type available - the indexed array. To replicate a key/value-oriented array, you have to use two arrays that are loaded simultaneously. When parsing, you use script logic to walk the "key" array and use the matching index on the "value" array. There is no ability to perform fuzzy matches; you have to condition on the whole string for a match.
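A short sketch of that parallel-array technique in Bash (the host names and descriptions are made up for illustration):

#!/bin/bash
# Emulate a key/value lookup with two parallel indexed arrays.
keys=(host01 host02 host03)
values=("Network Monitoring" "Marketing Services" "Accounting Services")

lookup() {
    i=0
    for k in "${keys[@]}"; do
        # Whole-string comparison only; there is no fuzzy matching.
        if [ "$k" = "$1" ]; then
            echo "${values[$i]}"
            return 0
        fi
        i=$((i + 1))
    done
    return 1
}

lookup host02     # prints: Marketing Services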

One last point concerns variable scope. Shell variables are all in the "global scope", unlike in other scripting languages: the contents of a named variable are universally available throughout the script under the same name. Other languages treat named variables as local to the scope of a function unless they are declared global. Where one function is subordinate to another function, its variables are inherited from its parent.
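A short illustration of that default global scope (Bash and Korn do offer a local/typeset declaration inside functions, but without it the behavior is the same):

#!/bin/sh
# All variables land in one global scope; a function's assignment is
# visible to the rest of the script.
setname() {
    NAME="host42"
}

setname
echo "NAME is now $NAME"    # prints: NAME is now host42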

Here are a couple good tutorial and reference guides:

2.3.2 - C Shell

C Shell is an artifact that never seems to go away. This article is a technical rant; I would not recommend using the C Shell.

Introduction

C Shell was created by Bill Joy while he was a graduate student at the University of California, Berkeley in the late 1970s. It has been widely distributed, beginning with the 2BSD release of the Berkeley Software Distribution (BSD), which Joy first distributed in 1978. It is used as an interactive shell, meaning that it provides an interface for the user to execute other programs interactively, but it was also intended to execute a pre-determined set of commands as a single program. There has been a cult following for /bin/csh usage, mainly because those people generally don't know any other scripting language.

I've had to script with C Shell. From experience, I've found it to be quirky, with reproducible errors that make no sense, which I worked around by embedding a comment just to get the code interpreted correctly. The shell hasn't been updated since the 1990s, though the open source community has produced a parallel version that arguably has been modernized. When working in a multi-platform environment, the open source C Shell variants do not exist on proprietary UNIX systems. I avoid using C Shell altogether, though some application developers still hold onto the shell as the wrapper around their compiled binaries.

Here is an article that was published by Bruce Barnett several years ago. The points that he and other contributors made are still valid.

Top Ten Reasons Not to Use the C Shell

Written by Bruce Barnett, with MAJOR help from Peter Samuelson, Chris F.A. Johnson, Jesse Silverman, Ed Morton, and of course Tom Christiansen

Updated:

  • September 22, 2001
  • November 26, 2002
  • July 12, 2004
  • February 27, 2006
  • October 3, 2006
  • January 17, 2007
  • November 22, 2007
  • March 1, 2008
  • June 28, 2009

In the late 80’s, the C shell was the most popular interactive shell. The Bourne shell was too “bare-bones.” The Korn shell had to be purchased, and the Bourne Again shell wasn’t created yet.

I’ve used the C shell for years, and on the surface it has a lot of good points. It has arrays (the Bourne shell only has one). It has test(1), basename(1) and expr(1) built-in, while the Bourne shell needed external programs. UNIX was hard enough to learn, and spending months to learn two shells seemed silly when the C shell seemed adequate for the job. So many have decided that since they were using the C shell for their interactive session, why not use it for writing scripts?

THIS IS A BIG MISTAKE.

Oh - it’s okay for a 5-line script. The world isn’t going to end if you use it. However, many of the posters on USENET treat it as such. I’ve used the C shell for very large scripts and it worked fine in most cases. There are ugly parts, and work-arounds. But as your script grows in sophistication, you will need more work-arounds and eventually you will find yourself bashing your head against a wall trying to work around the problem.

I know of many people who have read Tom Christiansen’s essay about the C shell (http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/), and they were not really convinced. A lot of Tom’s examples were really obscure, and frankly I’ve always felt Tom’s argument wasn’t as convincing as it could be. So I decided to write my own version of this essay - as a gentle argument to a current C shell programmer from a former C shell fan.

[Note - since I compare shells, it can be confusing. If the line starts with a "%" then I'm using the C shell. If it starts with a "$" then it is the Bourne shell.]

Top Ten reasons not to use the C shell

  1. The Ad Hoc Parser
  2. Multiple-line quoting difficult
  3. Quoting can be confusing and inconsistent
  4. If/while/foreach/read cannot use redirection
  5. Getting input a line at a time
  6. Aliases are line oriented
  7. Limited file I/O redirection
  8. Poor management of signals and sub-processes
  9. Fewer ways to test for missing variables
  10. Inconsistent use of variables and commands.

1. The Ad Hoc Parser

The biggest problem of the C shell (and TCSH) is its ad hoc parser. Now this information won't make you immediately switch shells. But it's the biggest reason to do so. Many of the other items listed are based on this problem. Perhaps I should elaborate.

The parser is the code that converts the shell commands into variables, expressions, strings, etc. High-quality programs have a full-fledged parser that converts the input into tokens, verifies the tokens are in the right order, and then executes the tokens. The Bourne shell even has an option to parse a file without executing anything, so you can syntax check a file without executing it.

The C shell does not do this. It parses as it executes. You can have expressions in many types of instructions:

% if ( expression )
% set variable = ( expression )
% set variable = expression
% while ( expression )
% @ var = expression

There should be a single token for expression, and the evaluation of that token should be the same. They are not. You may find out that

% if ( 1 )

is fine, but

% if(1)

or

% if (1 )

or

% if ( 1)

generates a syntax error. Or you may find that the above works, but if you add a "!" or change "if" into "while", or do both, you get a syntax error.

You never know when you will find a new bug. As I write this (September 2001) I ported a C shell script to another UNIX system. (It was my .login script, okay? Sheesh!) Anyhow I got an error “Variable name must begin with a letter” somewhere in the dozen files used when I log in. I finally traced the problem down to the following “syntax” error:

% if (! $?variable ) ...

Which variable must begin with a letter? Give up? Here’s how to fix the error:

% if ( ! $?variable ) ...

Yes - you must add a space before the “!” character to fix the “Variable name must begin with a letter” error. Sheesh!

The examples in the manual page don't (or didn't) mention that spaces are required. In other words, I provided a perfectly valid syntax according to the documentation, but the parser got confused and generated an error that wasn't even close to the real problem. I call this type of error a "syntax" error. Except that instead of the fault being on the user - like normal syntax errors - the fault is in the shell, because the parser screwed up!

Sigh…

Here’s another one. I wanted to search for a string at the end of a line, using grep. That is:

% set var = "string"
% grep "$var$" < file

Most shells treat this as:

% grep "string$" <file

Great. Does the C shell do this? As John Belushi would say, “Noooooo!” Instead, we get

Variable name must contain alphanumeric characters.

Ah. So we back quote (backslash) it.

% grep "$var\$" <file

This doesn’t work. The same thing happens. One work-around is

% grep "$var"'$' <file

Sigh…

Here’s another. For instance,

% if ( $?A ) set B = $A

If variable A is defined, then set B to $A. Sounds good. The problem? If A is not defined, you get “A: Undefined variable.” The parser is evaluating A even if that part of the code is never executed.

If you want to check a Bourne shell script for syntax errors, use "sh -n." This doesn't execute the script, but it does check for errors. What a wonderful idea. Does the C shell have this feature? Of course not. Errors aren't found until they are EXECUTED. For instance, the code

% if ( $zero ) then
% while
% end
% endif

will execute with no complaints. However, if $zero becomes one, then you get the syntax error:

while: Too few arguments.

Here’s another:

if ( $zero ) then
     if the C shell has a real parser - complain
endif

In other words, you can have a script that works fine for months, and THEN reports a syntax error if the conditions are right. Your customers will love this “professionalism.”

And here’s another I just found today (October 2006). Create a script that has

#!/bin/csh -f
if (0)
endif

And make sure there is no “newline” character after the endif. Execute this and you get the error

then: then/endif not found.

Tip: Make sure there is a newline character at the end of the last line.

And this one (August 2008)

% set a="b"
% set c ="d"
set: Variable name must begin with a letter.

So adding a space before the “=” makes “d” a variable? How does this make any sense?

Add a special character, and it becomes more unpredictable. This is fine

% set a='$'

But try this

% set a="$"
Illegal variable name.

Perhaps this might make sense, because variables are evaluated in double quotes. But try to escape the special character:

% set a="\$"
Variable name must contain alphanumeric characters.

However, guess what works:

% set a=$

as does

% set a=\$

It’s just too hard to predict what will and what will not work.

And we are just getting warmed up. The C shell is a time bomb, gang…

Tick… Tick… Tick…

2. Multiple-line quoting difficult

The C shell complains if strings are longer than a line. If you are typing at a terminal, and only type one quote, it’s nice to have an error instead of a strange prompt. However, for shell programming - it stinks like a bloated skunk.

Here is a simple ‘awk’ script that adds one to the first value of each line. I broke this simple script into three lines, because many awk scripts are several lines long. I could put it on one line, but that’s not the point. Cut me some slack, okay?

(Note also - at the time I wrote this, I was using the old version of AWK, which did not allow partial expressions to cross line boundaries.)

#!/bin/awk -f
{print $1 + \
    2;
}

Calling this from a Bourne shell is simple:

#!/bin/sh
awk '
    {print $1 + \
        2;
    }
    '

They look the SAME! What a novel concept. Now look at the C shell version.

#!/bin/csh -f
awk '{print $1 + \\
         2 ;\
     }'

An extra backslash is needed. One line has two backslashes, and the second has one. Suppose you want to set the output to a variable. Sounds simple? Perhaps. Look how it changes:

#!/bin/csh -f
set a = `echo 7 | awk '{print $1 + \\\
        2 ;\\
    }'`

Now you need three backslashes! And the second line only has two. Keeping track of those backslashes can drive you crazy when you have large awk and sed scripts. And you can’t simply cut and paste scripts from different shells - if you use the C shell. Sometimes I start writing an AWK script, like

#!/bin/awk -f
BEGIN {A=123;}
etc...

And if I want to convert this to a shell script (because I want to specify the value of 123 as an argument), I simply replace the first line with an invocation to the shell:

#!/bin/sh
awk '
BEGIN {A=123;}
'
etc.

If I used the C shell, I’d have to add a \ before the end of each line.

Also note that if you WANT to include a newline in a string, strange things happen:

% set a = 'a \
    b'
% echo $a
  a b

The newline goes away. Suppose you really want a newline in the string. Will another backslash work?

% set a = 'a \\
    b'
% echo $a
  a \ b

That didn’t work. Suppose you decide to quote the variable:

% set a = 'a \
  b'
% echo "$a"
  Unmatched ".

Syntax error!? How bizarre. There is a solution - use the :q quote modifier.

% set a = 'a \
    b'
% echo $a:q
  a
  b

This can get VERY complicated when you want to make aliases include backslash characters. More on this later. Heh. Heh.

One more thing - normally a shell allows you to put the quotes anywhere on a line:

echo abc"de"fg

is the same as

echo "abcdefg"

That’s because the quote toggles the INTERPRET/DON’T INTERPRET parser. However, you cannot put a quote right before the backslash if it follows a variable name whose value has a space. These next two lines generate a syntax error:

% set a = "a b"
% set a = $a"\
  c"

All I wanted to do was to append a “c” to the $a variable. It only works if the current value does NOT have a space. In other words

% set a = "a_b"
% set a = $a"\
  c"

is fine. Changing “_” to a space causes a syntax error. Another surprise. That’s the C shell - one never knows where the next surprise will be.

3. Quoting can be confusing and inconsistent

The Bourne shell has three types of quotes:

"........" - only $, `, and \ are special. '.......' - Nothing is special (this includes the backslash) \. - The next character is not special (Exception: a newline)

That’s it. Very few exceptions. The C shell is another matter. What works and what doesn’t is no longer simple and easy to understand.

As an example, look at the backslash quote. The Bourne shell uses the backslash to escape everything except the newline. In the C shell, it also escapes the backslash and the dollar sign. Suppose you want to enclose $HOME in double quotes. Try typing:

% echo "$HOME"
  /home/barnett

Logic tells us to put a backslash in front. So we try

% echo "\$HOME"
  \/home/barnett

Sigh. So there is no way to escape a variable in a double quote. What about single quotes?

% echo '$HOME'
  $HOME

works fine. But here’s another exception.

% echo MONEY$
  MONEY$
% echo 'MONEY$'
  MONEY$
% echo "MONEY$"
  Illegal variable name.

The last one is illegal. So adding double quotes CAUSES a syntax error.

With single quotes, the “!” character is special, as is the “~” character. Using single quotes (the strong quotes), the command

% echo '!1'

will give you the error

  1: Event not found.

A backslash is needed because the single quotes won’t quote the exclamation mark. On some versions of the C shell,

echo hi!

works, but

echo 'hi!'

doesn’t. A backslash is required in front:

echo 'hi\!'

or if you wanted to put a ! before the word:

echo '\!hi'

Now suppose you type

% set a = "~"
% echo $a
  /home/barnett
% echo '$a'
  $a
% echo "$a"
  ~

The echo commands output THREE different values depending on the quotes. So no matter what type of quotes you use, there are exceptions. Those exceptions can drive you mad.

And then there’s dealing with spaces.

If you call a C shell script, and pass it an argument with a space:

% myscript "a b" c

Now guess what the following script will print.

#!/bin/csh -f
echo $#argv
set b = ( $* )
echo $#b

It prints “2” and then “3”. A simple = does not copy a variable correctly if there are spaces involved. Double quotes don’t help. It’s time to use the fourth form of quoting - which is only useful when displaying (not setting) the value:

% set b = ( $*:q )

Here’s another. Let’s say you want nested backticks. Some shells use $(program1 $(program2)) to allow this. The C shell does not, so you have to nest the backticks themselves. I would expect this to be

`program1 \`program2\` `

but what works is the illogical

`program1 ``program2``

Got it? It gets worse. Try to pass backslashes to an alias. You need billions and billions of them. Okay. I exaggerate. A little. But look at Dan Bernstein’s two aliases used to get quoting correct in aliases:

% alias quote "/bin/sed -e 's/\\!/\\\\\!/g' \\
    -e 's/'\\\''/'\\\'\\\\\\\'\\\''/g' \\
    -e 's/^/'\''/' \\
    -e 's/"\$"/'\''/'"
% alias makealias "quote | /bin/sed 's/^/alias \!:1 /' \!:2*"

You use this to make sure you get quotes correctly specified in aliases.

Larry Wall calls this backslashitis. What a royal pain. Tick.. Tick.. Tick..

4. If/while/foreach/read cannot use redirection

The Bourne shell allows complex commands to be combined with pipes. The C shell doesn’t. Suppose you want to choose an argument to grep. Example:

% if ( $a ) then
% grep xxx
% else
% grep yyy
% endif

No problem as long as the text you are grepping is piped into the script. But what if you want to create a stream of data in the script? (i.e. using a pipe). Suppose you change the first line to be

% cat $file | if ($a ) then

Guess what? The file $file is COMPLETELY ignored. Instead, the script uses the standard input of the script, even though you used a pipe on that line. The only standard input the “if” command can use MUST be specified outside of the script. Therefore what can be done in one Bourne shell file has to be done in several C shell scripts - because a single script can’t be used. The “while” command is the same way. For instance, the following Bourne shell commands output the time with hyphens between the numbers instead of colons:

$ date | tr ':' ' ' | while read a b c d e f g
$ do
$ echo The time is $d-$e-$f
$ done

You can use < as well as pipes. In other words, ANY command in the Bourne shell can have the data-stream redirected. That’s because it has a REAL parser [rimshot].

Speaking of which… The Bourne shell allows you to combine several lines onto a single line as long as semicolons are placed between. This includes complex commands. For example - the following is perfectly fine with the Bourne shell:

$ if true;then grep a;else grep b; fi

This has several advantages. Commands in a makefile - see make(1) - have to be on one line. Trying to put a C shell “if” command in a makefile is painful. Also - if your shell allows you to recall and edit previous commands, then you can use complex commands and edit them. The C shell allows you to repeat only the first part of a complex command, like the single line with the “if” statement. It’s much nicer recalling and editing the entire complex command. But that’s for interactive shells, and outside the scope of this essay.

5. Getting input a line at a time

Suppose you want to read one line from a file. This simple task is very difficult for the C shell. The C shell provides one way to read a line:

% set ans = $<

The trouble is - this ALWAYS reads from standard input. If a terminal is attached to standard input, then it reads from the terminal. If a file is attached to the script, then it reads the file.

But what do you do if you want to specify the filename in the middle of the script? You can use “head -1” to get a line, but how do you read the next line? You can create a temporary file, and read and delete the first line. How ugly and extremely inefficient. On a scale of 1 to 10, it scores -1000.

Now what if you want to read a file, and ask the user something during this? As an example - suppose you want to read a list of filenames from a pipe, and ask the user what to do with some of them? Can’t do this with the C shell - $< reads from standard input. Always. The Bourne shell does allow this. Simply use

$ read ans </dev/tty

to read from a terminal, and

$ read ans

to read from a pipe (which can be created in the script). Also - what if you want to have a script read from STDIN, create some data in the middle of the script, and use $< to read from the new file? Can’t do it. There is no way to do

set ans = $< <newfile

or

set ans = $< </dev/tty

or

echo ans | set ans = $<

$< is only STDIN, and cannot change for the duration of the script. The workaround usually means creating several smaller scripts instead of one script.
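
To make the contrast concrete, here is a minimal Bourne shell sketch of the pattern described above - reading filenames from a pipe while asking the user about each one on the terminal. The filenames and the prompt are hypothetical, not from the original essay.

#!/bin/sh
# Filenames arrive on standard input (e.g. piped in from "find").
while read file
do
    echo "Delete $file? (y/n)"
    # The user's answer comes from the terminal, not from the pipe.
    read answer </dev/tty
    if [ "$answer" = "y" ]; then
        rm "$file"
    fi
done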

6. Aliases are line oriented

Aliases MUST be one line. However, the “if” WANTS to be on multiple lines, and quoting multiple lines is a pain. Clearly the work of a masochist. You can get around this if you bash your head enough, or else ask someone else with a soft spot for the C shell:

% alias X 'eval "if (\!* =~ 'Y') then \\
                    echo yes \\
                 else \\
                    echo no \\
                 endif"'

Notice that the “eval” command was needed. The Bourne shell function is more flexible than aliases, simpler and can easily fit on one line if you wish.

$ X() { if [ "$1" = "Y" ]; then echo yes; else echo no; fi;}

If you can write a Bourne shell script, you can write a function. Same syntax. There is no need to use special “\!:1” arguments, extra shell processes, special quoting, multiple backslashes, etc. I’m SOOOO tired of hitting my head against a wall.

Functions allow you to simplify scripts. In the C shell, anything more sophisticated than an alias (anything that would need a function) requires a separate csh script/file.

Tick..Tick..Tick..

7. Limited file I/O redirection

The C shell has one mechanism to specify standard output and standard error, and a second to combine them into one stream. It can be directed to a file or to a pipe.

That’s all you can do. Period. That’s it. End of story.

It’s true that for 90% to 99% of the scripts this is all you need to do. However, the Bourne shell can do much much more:

  • You can close standard output, or standard error.
  • You can redirect either or both to any file.
  • You can merge output streams
  • You can create new streams

As an example, in the Bourne shell it’s easy to send standard error to a file and leave standard output alone. The C shell can’t do this very well.
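
To illustrate the kinds of redirection listed above, here are a few Bourne shell one-liners; the command and file names are placeholders, not from the original essay:

$ myprog 2>errors.log                   # standard error to a file, standard output untouched
$ myprog >out.log 2>&1                  # merge standard error into standard output
$ myprog 2>&1 >/dev/null | grep WARN    # pipe only standard error, discard standard output
$ myprog 2>&-                           # close standard error entirely

The closest the C shell comes is “>&”, which always lumps both streams together; the usual workaround for separating them involves wrapping the command in a subshell.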

Tom Christiansen gives several examples in his essay. I suggest you read his examples. See http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/

8. Poor management of signals and subprocesses

The C shell has very limited signal and process management.

Good software can be stopped gracefully. If an error occurs, or a signal is sent to it, the script should clean up all temporary files. The C shell has one signal trap:

% onintr label

To ignore all signals, use

% onintr -

The C shell can be used to catch all signals, or ignore all signals. All or none. That’s the choice. That’s not good enough.

Many programs have (or need) sophisticated signal handling. Sending a -HUP signal might cause the program to re-read configuration files. Sending a -USR1 signal may cause the program to turn debug mode on and off. And sending -TERM should cause the program to terminate. The Bourne shell can have this control. The C shell cannot.

Have you ever had a script launch several sub-processes and then try to stop them when you realized you made a mistake? You can kill the main script with a Control-C, but the background processes are still running. You have to use “ps” to find the other processes and kill them one at a time. That’s the best the C shell can do. The Bourne shell can do better. Much better.

A good programmer makes sure all of the child processes are killed when the parent is killed. Here is a fragment of a Bourne shell program that launches three child processes, and passes a -HUP signal to all of them so they can restart.

$ PIDS=
$ program1 & PIDS="$PIDS $!"
$ program2 & PIDS="$PIDS $!"
$ program3 & PIDS="$PIDS $!"
$ trap "kill -1 $PIDS" 1

If the program wanted to exit on signal 15, and echo its process ID, a second signal handler can be added by adding:

$ trap "echo PID $$ terminated;kill -TERM $PIDS;exit" 15

You can also wait for those processes to terminate using the wait command:

$ wait "$PIDS"

Notice you have precise control over which children you are waiting for. The C shell waits for all child processes. Again - all or none - those are your choices. But that’s not good enough. Here is an example that executes three processes. If they don’t finish in 30 seconds, they are terminated - an easy job for the Bourne shell:

$ MYID=$$
$ PIDS=
$ (sleep 30; kill -1 $MYID) &
$ (sleep 5;echo A) & PIDS="$PIDS $!"
$ (sleep 10;echo B) & PIDS="$PIDS $!"
$ (sleep 50;echo C) & PIDS="$PIDS $!"
$ trap "echo TIMEOUT;kill $PIDS" 1
$ echo waiting for $PIDS
$ wait $PIDS
$ echo everything OK

There are several variations of this. You can have child processes start up in parallel, and wait for a signal for synchronization.

There is also a special “0” signal. This is the exit condition - the trap is run when the script exits. So the Bourne shell can easily delete temporary files when done:

trap "/bin/rm $tempfiles" 0

The C shell lacks this. There is no way to get the process ID of a child process and use it in a script. The wait command waits for ALL processes, not the ones you specify. It just can’t handle the job.

9. Fewer ways to test for missing variables

The C shell provides a way to test if a variable exists - using the $?var syntax:

% if ( $?A ) then
% echo variable A exists
% endif

However, there is no simple way to determine if the variable has a value. The C shell test

% if ($?A && ("$A" =~ ?*)) then

returns the error:

A: undefined variable.

You can use nested “if” statements using:

% if ( $?A ) then
% if ( "$A" =~ ?* ) then
% # okay
% else
% echo "A exists but does not have a value"
% endif
% else
% echo "A does not exist"
% endif

The Bourne shell is much easier to use. You don’t need complex “if” commands. Test the variable while you use it:

$ echo ${A?'A does not have a value'}

If the variable exists with no value, no error occurs. If you want to add a test for the “no-value” condition, add the colon:

$ echo ${A:?'A is not set or does not have a value'}

Besides reporting errors, you can have default values:

$ B=${A-default}

You can also assign values if they are not defined:

$ echo ${A=default}

These also support the “:” to test for null values.
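
A quick sketch of how the “:” variants behave, reusing the variable A from above:

$ unset A
$ echo ${A:-fallback}     # prints "fallback"; A stays unset
$ echo ${A:=fallback}     # prints "fallback" and assigns it to A
$ A=""
$ echo ${A-guard}         # prints an empty line: A is set (to null), and without ":" no default is used
$ echo ${A:-guard}        # prints "guard": with ":" a null value also triggers the default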

10. Inconsistent use of variables and commands.

The Bourne shell has one type of variable. The C shell has seven:

  • Regular variables - $a
  • Wordlist variables - $a[1]
  • Environment variables - $A
  • Alias arguments - !1
  • History arguments - !1
  • Sub-process variables - %1
  • Directory variables - ~user

These are not treated the same. For instance, you can use the :r modifier on regular variables, but on some systems you cannot use it on environment variables without getting an error. Try to get the process ID of a child process using the C shell:

program &
echo "I just created process %%"

It doesn’t work. And forget using ~user variables for anything complicated. Can you combine the :r with history variables? No. I’ve already mentioned that quoting alias arguments is special. These variables and what you can do with them are not consistent. Some have very specific functions. The alias and history variables use the same character, but have different uses.

This is also seen when you combine built-ins. If you have an alias “myalias” then the following lines may generate strange errors (as Tom has mentioned before):

repeat 3 myalias
kill -1 `cat file`
time | echo

In general, using pipes, backquotes and redirection with built-in commands is asking for trouble, i.e.

echo "!1"
set j = ( `jobs` )
kill -1 $PID || echo process $PID not running

There are many more cases. It’s hard to predict how these commands will interact. You THINK it should work, but when you try it, it fails.

Here are some more examples. You can have an array in the C shell, but if you try to add a new element, you get strange errors.

% set a = ()
% @ a[1] = 2
@: Subscript out of range.

So if you want to add to an existing array, you have to use something like

set a = ( $a 2 )

Now this works

@ arrayname[1] = 4

but try to store a string in the array.

@ arrayname[1] = "a"

and you get

@: Badly formed number.

Another bug - from Aleksandar Radulovic - if the last line of a C shell script does not have a newline character, it never gets executed.

I just discovered another odd bug with the C shell - thanks to a posting from “yusufm”:

Guess what the following script will generate

setenv A 1
echo $A
setenv A=2
echo $A
setenv B=3
echo $B
setenv B=4
echo $B

I’m not going to tell you what the bug is, or how many there are. I think it’s more fun to let you discover it yourself.

I can add some more reasons. Jesse Silverman says reason #0 is that it’s not POSIX compliant. True. But the C shell was written before the standard existed. This is a historical flaw, and not a braindead stupid lazy dumb-ass flaw.

In Conclusion

I’ve listed the reasons above in what I feel to be order of importance. You can work around many of the issues, but you have to consider how many hours you will spend fighting the C shell, finding ways to work around the problems. It’s frustrating, and frankly - spending some time to learn the basics of the Bourne shell is worth every minute. Every UNIX system has the Bourne shell or a super-set of it. It’s predictable, and much more flexible than the C shell. If you want a script that has no hidden syntax errors, properly cleans up after itself, gives you precise control over the elements of the script, and allows you to combine several parts into a large script, use the Bourne shell.

I found myself developing more and more bad habits over time because I was using the C shell. I would use

foreach a ( `cat file` )

instead of redirection. I would use several smaller scripts to work around problems in one script. And most importantly, I put off learning the Bourne shell for years as I struggled with the C shell. Don’t make the same mistake I made.
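
For comparison, the redirection-based Bourne shell habit I should have been building looks something like this (the file name and the action are hypothetical); it also reads a line at a time rather than a word at a time:

$ while read a
$ do
$     echo "processing $a"
$ done < file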

3 - Data Storage

Call me simple. One of my underlying philosophies in technology is to use the K.I.S.S. (keep it simple stupid) principle. I have found from practical experience that managing the data itself is a unique area for administration when compared to managing the application/service accessing the data. The best way I have found for managing data is to store it on dedicated servers that only perform data services across the network.

Introduction

NAS Storage Rack I have heard arguments in the past that data can only be reliable when served off of the server that is accessing the data. Data performance was another argument. I was leveraging data delivered over IP/Ethernet before those antagonists were coerced into coming to terms with efficient management of large data.

Here are topics related to data storage and managing data on a network.

3.1 - Shared Data Stored on Network Attached Storage (NAS)

A short discussion on the use of a network attached storage (NAS) in a networked computing environment.

Introduction

NAS Storage Rack I have worked in shops where there are a significant number of high end technical workstations deployed throughout the enterprise, all needing to share the same application related data. Most open systems shops seem to only service one, possibly two platforms, and all the related data is only accessed by a single host. Fifteen years ago NAS appliances came of age and provided better ways of managing data apart from just consuming it and dealing with both the logical and physical limits that localized block storage traditionally imposed.

Advantages of NAS:

  • Centralized data management consumed by all hosts on the network.
  • Backups can be offloaded onto other hosts than where the data is being consumed.
  • Strategic component for supporting a disaster recovery strategy.

Disadvantages of NAS:

  • Potential for less performance. This has been highly debated. There are trade-offs here in that, at the end of the day, you have a pipe and multiple layers between the physical storage and the presentation of a file system in some form. An example from the past that is applicable here is that of a sound system: if any component (microphone, cables, sound board, speaker) from the source to the speaker is inferior, so will be the sound quality.

  • Consumption over a network is configured by a set of rules for what host and which user can access the data. It is harder, though not impossible, to manage where that data is being consumed.

The advantages outweigh the disadvantages in my book. Any potential performance degradation (I’ve actually seen better performance when tuned correctly) is minuscule and is outweighed by the efficient centralized management of the data itself, particularly for backups and disaster recovery. Managing data is a universal issue that takes good architectural design to provide a system for defining responsibility and accountability beyond how the data is consumed.

3.2 - TrueNAS at Home

My use of a NAS solution for home and its rationalization.

Introduction

While our kids were growing up, administering upwards of 8 computers at home, most of which were running Windows XP at the time, was onerous for me. I would spend one Saturday a month applying updates, scanning for viruses and malware, and re-orging the disks. What we lacked was data backup.

The Test

I put my kids to the test as to why they needed to run Windows. I put a Fedora Linux desktop out in our game room and had the kids do their work and play on that machine and tell me what software they needed that would only run on Windows. After a month, we discovered that there was no need to run Windows. OpenOffice and Firefox would work for all their school related tasks, and the games they played were all in Flash or Java off the web.

After the experiment was over, I repurposed all the other Windows based desktops with Fedora Linux (outside of my wife’s desktop – that took more convincing). I implemented NIS for identity management and for delivering automount maps, along with serving all the home directories off of my desktop via NFS. Standing up an LDAP server wasn’t worth the hassle for such a small user base, and security wasn’t that critical. I later bought a full height computer that I could stuff with several hard disks and imaged it with FreeNAS, a BSD UNIX OS developed by iXsystems that runs ZFS as the file system. iXsystems provides a commercial hardware/software solution suitable for small to possibly medium businesses.
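
As a rough sketch of what that setup involved (the host name, network and paths are made-up examples, not my actual configuration), the serving desktop exported the home directories via NFS and NIS distributed a wildcard automount map:

/etc/exports on the serving desktop:
    /export/home    192.168.1.0/24(rw,sync)

auto.home map distributed via NIS:
    *    -fstype=nfs,rw    mydesktop:/export/home/&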

The Result

It worked really well for delivering data over NFS to all the desktops. The only residual problem I then had to deal with was the whining of the kids as to which desktop they wanted to use. Performance-wise, the desktops had different processor classes and memory footprints. Of course the kids only wanted to use the one machine that was the fastest! NFS performance was never an issue.

Today, iXsystems has consolidated its community version of FreeNAS as TrueNAS Core. TrueNAS utilizes BSD UNIX “jails” to provide more services than merely supplying a filesystem over the network. I am currently working with a packaged NextCloud jail to see how I like being able to access files from tablets and phones.

For backups, I simply perform an rsync of the data onto a USB drive. So far, this is possible since I have less than 2TB of data. Using Linux based backup software did not make sense; it would only add complexity into the mix for such a small amount of data.
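
The backup itself is a one-liner along these lines (the mount points are examples only, not my actual paths):

rsync -av --delete /mnt/nasdata/ /mnt/usbdrive/nas-backup/

The -a flag preserves ownership, permissions and timestamps, and --delete keeps the USB copy a true mirror by removing files that no longer exist on the source.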

You can read more about TrueNAS at (https://www.truenas.com/)

4 - Web Related

In our modern world, one cannot escape the use of web based resources. Our computing resources have steadily been migrating into the cloud space, accessed through an application that is delivered by the web browser. I have lived through the initial version of HTML and the pains it took to create a web site that was compatible between the major browsers, only to wind up with a web site that was crude at best. In today’s world, standing up a web server can be done by following a recipe off the ‘net. Creating your own website can be done in a variety of ways: ground up construction, CMS (Content Management System) tools that utilize a backend database and theme template to dynamically deliver content, and now tools that can create static content. This section contains articles over my experiences in working with web-based technologies.

Introduction

I have never been overly excited about coding HTML to create a website. It takes a lot of time to develop the look and feel. I preferred to concentrate on the content instead.

I had created a simple website using XHTML years ago. It was OK and was functional, but it was really not that attractive. I then migrated to Joomla!, a CMS (content management system) solution for a number of years. It took work to integrate a theme, but was fairly easy to create content.

I still wasn’t overly content with how it formatted my website. I also wasn’t thrilled at having a backend database to maintain as well, though I backed it up and kept a local copy. When I went to update and overhaul my website this past year, I looked at going backward to create a static website using a packaged theme that took care of the CSS whether it was a free template or for purchase. That was OK, but I found I was having to hack to get the navigation to be a cascading multi-level menu. Most templates these days assume a one page “resume” style.

I ran across an article at How-To Geek that featured Hugo, a static site generator (SSG). This seemed to be the best of both worlds: “markdown” content that could be intermingled with HTML markup to create my new website. This website was created using Hugo and is working well for my needs today.

The Web Model

4.1 - A Website with Joomla

I used Joomla in the past to dynamically present my web content. Joomla is a framework that incorporates a templated theme, a back-end database and the ability to extend functionality through extensions. This article doesn’t necessarily give a recipe to install; rather, it provides information you likely won’t readily find elsewhere.

Overview

Joomla! Joomla is a PHP framework that stores content and configuration in the backend database MySQL. Joomla's framework is a modular system where much of its functionality is plugged in by packaged extensions. Joomla excels at its flexibility for managing content (aka articles) stored as entries in the database. There is a web administrative interface that provides a graphical text editor to input articles for "contributors" and features for the "administrator" to manage the installation. There is a security interface for accessing the administrative and alternatively the public interface from across the web.

Why did I choose Joomla versus WordPress or Drupal? At the time, Joomla was the best documented to install, configure and maintain. Even at that, Joomla can be onerous for the newbie to get stood up and make functional. If you venture beyond the stock demo theme, you have to get to know the internals of the theme to know how to configure the web site to display its content correctly. One theme might support 3 columns, another only 2. One theme may use one name to identify an area of the page while another could call it something else. Generally, theme based documentation tends to be scant.

Why have I moved from Joomla, a CMS, to Hugo, a static site generator (SSG)? My content doesn’t tend to change. I don’t have non-technical contributors who need a user-friendly interface. Should I have the need for ecommerce, I’d likely incorporate that function on a Joomla installation with an ecommerce plugin and referentially point to that server from out of the static content.

Terms and Concepts

It took time to understand the design concept of Joomla. The first feat was to learn the basic terms used by Joomla.

Components

Content elements or applications that are usually displayed in the center of the main content area of a template. This depends on the template design.

Components are core elements of Joomla’s functionality. These core elements include Content, Banners, Contact, News Feeds, Polls and Web Links. There are third party components which are available through https://extensions.joomla.org.

Plugins

A plugin is a small, task oriented function that intercepts content before it is displayed and manipulates it in some way (e.g. WYSIWYG, authentication).

Modules

A module extends the capabilities of Joomla, giving the software new functionality. Modules are small content items that can be displayed anywhere your template allows by assigning them to positions and pages through the Module Manager. You can find other modules at https://forge.joomla.org. Here are some things to note about modules:

  • Modules not enabled will not display.
  • Modules can be assigned to unused positions (positions not in the template) if you want to have them published but not displayed in a position (e.g. display a module in content using {loadposition}).
  • Multiple modules may be assigned to the same position. They will be displayed in the order shown for modules in that position in Module Manager.
  • If you want to display a module in more than one position, use the Module Manager to create another copy of it.

Positions

Site templates divide the “pages” displayed on a site into a series of “positions”, each with a different name.

You can add/remove positions by modifying the index.php. You assign a module to a position using the Module Manager. Positions must be defined in templateDetails.xml.

Sections and Categories

Sections and categories allow you to organize content items/articles. Sections contain one or more categories. A page may contain one or more sections. Each article is associated with a named category. Where an article appears on your website depends on whether the category assigned to a page matches the category assigned to the article.

Using menus, you can link directly to sections, categories and content items. You can also select numerous options for the display of content associated with each type of link.

Content Items/Articles

Content items/articles are what you think of as web pages in the traditional HTML markup sense.

Articles

For access to articles, you can choose from section, category, archive, articles, and front page. Within a section, category and archive, you can choose “list” or “blog” layouts.

Blog Layout

Blog layout will show a listing of all articles of the selected blog type (section and category) in the main body position of your template.

List Layout

Table layout will simply give you a tabular list of all titles in a particular section or category.

Wrapper

Allows you to place stand alone applications and third party websites inside your site (frame). It is defined using the “mainbody” tag.

Each Component

Each component has its own link.

External Link

A link to an external site.

Separator

Just a line used for separating items in the menu itself.

Alias

An alias lets you make a link matching an existing menu item.

Front Page

By default, this is the “home page”.


Favorite Extensions

  • Akeeba Backup - Essential tool to have to get a backup of your web content.
  • Phoca Photo Gallery - A great plugin to handle your photos in a gallery style. Phoca has other extensions that are really worthwhile.
  • DJ Image Slider - Supplies the automated image slider for a main page.
  • J2 Store - I never used this plugin, but considered implementing it. This extension is well designed and actively supported. Its drawback is no integration for PayPal or Amazon payments.

4.2 - Creating a Website with Hugo

Automation never seems to find the end of the road. When you think something routine can’t be automated in another way, BAM!, here it comes. Hugo is a static site generator written in the “Go” language that builds web pages from “markdown” content. It was developed by Steve Francia in 2013, with development continued by Bjørn Erik Pedersen since 2015. Hugo takes tagged text (aka “markdown”) and creates static web pages, meaning the HTML sent to the browser is generated ahead of time rather than on the fly by software running on the web server. This article gives an overview of my experience developing this website using Hugo. Hugo’s home is at https://gohugo.io/.

Overview

Hugo

I read this article on How-To Geek and was intrigued by the concept of producing a static website without the hassle of coding HTML5 markup and fiddle-farting around with the related CSS. This is how I created this website using the Hugo software framework and markdown content. I’ve been quite impressed with its simplicity and my ability to concentrate on the content. I really like the built-in pseudo web server that dynamically rebuilds the static pages as you save a content file, so you can view the result immediately in a web browser, along with the integration with GitHub.

I started with YouTube videos to get a jump on the basics for developing my website using Hugo. Most of what I watched was produced by Chris Stayte and Mike Dane. I used these references in getting started:

Prerequisites

  1. Opened an account at https://github.com. Installed the Git package on my Fedora desktop.
sudo dnf install git

I also set up my project on GitHub.

  2. Installed the Hugo package on my Fedora desktop
sudo dnf install hugo
  3. Downloaded the atom workbench package from (https://atom.io) and installed it.
sudo dnf install atom.x86_64.rpm

I played with atom a bit, but wound up just text editing the files from the command line using the vi editor. Gedit might be more flexible to use, since positioning within a paragraph is difficult using vi.

Setup Website Project for Hugo

cd /path/to/hugo.devlt
hugo new site awolfe.info
cd awolfe.info
git init
git add .
git status

Installed Theme

The Docsy installation guide gave instructions for installing hugo-extended and a couple other packages. The Hugo package on Fedora contained the “extended” feature set and did not need the other packages mentioned to be installed. I wound up needing npm, but I installed it later.

cd /path/to/hugo.devlt/awolfe.info/themes
git clone --recurse-submodules --depth 1 https://github.com/google/docsy.git

To test the theme installation I did:

cd docsy/userguide
hugo server --themesDir ../..

When the Hugo server started, it listed a table of how many pages were built and gave me the URL to access the website. I was able to verify that the example (userguide) worked through Hugo, though it complained:

found no layout file for "JSON" for kind `home`:  You should create a template file which matches Hugo layouts Lookup Rules for this combination.

This issue was due to the config.toml file listing “JSON” as an output format for the “home” kind, which was not needed.

[CTL]-C on the command line stops the Hugo server. Starting off, I had two instances of the Hugo server running. The first starts up on port 1313, with the second one starting on some random port. After running the server for a while, or when I encountered some odd issue, I found I had to kill the server and restart it to correct the issue.

Initial Configuration of My Website

NOTE

Hugo’s configuration begins with the config.toml file and certain subdirectories. Hugo is fueled primarily by the theme installation. If you need to modify some component in the “themes” directory, you copy it out to your website project directory and modify it there. Anything in your website project subdirectories overrides what is in the “themes” directory. This is purposeful: because the theme is cloned from its project source at GitHub, a future update of the theme will not tromp over your modifications.

To start off, I copied the config.toml file from out of the userguide directory from the Docsy theme into my website configuration. I then changed the obvious suspects.

baseURL = "/"
title = "Allan Wolfe"
theme = ["docsy"]

Creating Menu Structure and Articles

Optionally, you can modify the default front matter template at archetypes/default.md. You can create a new page by:

hugo new pagename.md

I wound up just copying an _index.md and going at it for the next page.

NOTE

Hugo looks for an _index.md in the root directory of your content directory. The root page is treated differently than the subordinate pages. For each section, there is an _index.md file in the subdirectory that supports your section landing page.

The structure of your website is defined in terms of a subdirectory structure, each with an _index.md or index.md in its directory. (I found that with the Docsy theme, only _index.md formatted the page correctly.) You create a subdirectory as you organize multiple articles displayed on a single section page. Those .md (markdown) files are named something other than _index.md or index.md.
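
As an illustration, the content tree ends up looking something like this (the section and file names are made up, not my actual site):

content/
    _index.md                  (root landing page)
    technology/
        _index.md              (section landing page)
        some-article.md        (an article within the section)
        storage/
            _index.md
            another-article.md
    blog/
        _index.md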

My default markdown file looked something like this for top menu level:

---
title: "Blue Skies"
linkTitle: "Blue Skies"
date: 2018-02-15
menu:
  main:
    weight: 10
---

and this for sub sections and articles:

---
title: "Some Prolific Article"
linkTitle: "Some Prolific Article"
weight: 100
date: 2021-02-16
description: >
  Some description to appear on the referencing page and in the article.
---

NOTE

I saw the term “front matter” used. I learned that this term originated with the idea of writing a book. Front matter in book publishing means simply the first sections of the book such as the title page, copyright page and table of contents. “Back matter” means those sections at the end of a book such as the index or appendix. Front matter here is merely a reference to the YAML header of a markdown file (i.e. the key/value pairs between the “---” markers).

NOTE

It is worth mentioning that the front matter can also be written in TOML or JSON format instead of YAML (and Hugo accepts content formats beyond markdown, such as ORG). The difference between the front matter formats is the delimiters used.

All of the content of the article is placed below the second “---” marker. Hugo will recognize either HTML markup or markdown notation in the content area below the second “---” marker. This is a really great, flexible feature. You present the content in the easiest way possible and only complicate it as needed (e.g. inserting markup for an audio control to embed playing an audio file). Markdown doesn’t provide for all of what you can do in HTML markup, but it is preferred when possible – K.I.S.S. (keep it simple stupid) principle.

One frustration I had was figuring out why the format of my pages was not what I saw in the Docsy example code. Since I went with my own menu item names, I had to copy the particular themes/docsy/layouts subdirectory for the format I wanted into my project’s layouts directory, naming the directory the same as the menu item directory name.

For the document based menu items, I copied over themes/docsy/layouts/docs directory. Since the “Blog” was the same name as what was defined in the theme layout, I didn’t need to copy it over.

cd /path/to/hugo.devlt/awolfe.info
cp -R ./themes/docsy/layouts/docs ./layouts/technology

The static directory is reserved for files that are referenced but not interpreted (e.g. image files, but no markdown or HTML files). You can create subdirectories to better organize such things as image or pdf files for inclusion in your markdown or for download. In your markdown, you reference these files with a path beginning with “/” followed by the subdirectory you created; “static” itself is not part of the relative reference.
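
For example (the file names are hypothetical), an image dropped under static is referenced without the “static” prefix:

mkdir -p static/images
cp ~/Pictures/nas-rack.jpg static/images/nas-rack.jpg
# referenced in markdown as: ![NAS rack](/images/nas-rack.jpg)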

Another feature of Hugo is the ability to create your own code snippets called “shortcodes”. Shortcodes are stored at layouts/shortcodes and referenced in your markdown using double curly brackets. See the Hugo documentation for more information.

Enabling the Search Function

Of all the work done to stand up my website, enabling search was the most confusing. Hugo provides some built-in integration with commercial search engines for the private space such as Google’s GCSE or Algolia. I chose to go with Lunr.js since my web server is small and I didn’t want to tax it with cranking out search results. I preferred to decentralize search to the client since my index file would be relatively small. Hugo provides support for Lunr.js, but the documentation could be improved. There is also a dependency here on whether the theme implements the function. I used the Docsy theme, and it had its own documentation that provided a different perspective on how it integrated search into the theme and how to configure it. Here is what I did to get it to work for me.

First, NPM (a javascript package manager) needs to be installed in order to install certain packages that Hugo needs for producing an index file for downloading to the client. (I also found it needed some additional packages to publish the static content.)

On Fedora Linux:

sudo dnf install -y gcc gcc-c++ make
curl -sL https://rpm.nodesource.com/setup_10.x | sudo -E bash -
sudo dnf install npm
node -v      ## check that node installed correctly
npm -v       ## check that npm installed correctly

Gulp was needed to generate the index in JSON format. To install Gulp:

sudo npm install gulp-cli -g
gulp -v      ## Check that gulp installed and functions

Modified the Hugo project config.toml:

cd /path/to/hugo.devlt/awolfe.info
vi config.toml

Commented out the GCSE ID, turning off Algolia and enabling lunr.js:

# Google Custom Search Engine ID. Remove or comment out to disable search.
#gcs_engine_id = "011217106833237091527:la2vtv2emlw"

# Enable Algolia DocSearch
algolia_docsearch = false

# Enable local search (i.e. client side search using lunr.js))
offlineSearch = true
offlineSearchMaxResults = 25

Gulp is used to create the search***.json file in the build of the /public static files. The guides I read instructed me to manually run Gulp before compiling the static files with Hugo. It wasn’t worth the effort to manually create the search index file since Hugo does that as part of the build process anyway.

Build Static Pages

Publishing the static pages was fairly simple. I experienced some whining out of Hugo that required adding some extra packages, using npm to install them.

cd /path/to/hugo.devlt/awolfe.info
npm install --save-dev gulp postcss postcss-cli autoprefixer hugo-search-index
hugo 

The output defaults to a ./public directory. The npm --save-dev option saves the packages to the project’s node_modules directory. Trying to install globally produced some errors that I didn’t have when just installing locally. I’m not creating multiple websites anyway.

After publishing, I started the Hugo server again, tested the search feature, and reviewed the structure and content of my web site before copying the /public directory to my web server.

GitHub Push

Update your GitHub repository:

cd /path/to/hugo.devlt/awolfe.info
git push -u origin myproject

Final Notes

  • If you wish to exclude a page from the search index file, add this line to the front matter of the .md file:

exclude_search: true

  • If you wish to mark an article as “draft”:
draft: true

Other References

Ecommerce on Hugo

Photo Gallery in Hugo

5 - Service Now

ServiceNow is an American software company based in Santa Clara, California that develops a cloud computing platform to help companies manage digital workflows for enterprise operations, and has been recognized for innovation. Starting out as Glidesoft, the founders created a developer’s workbench to create, maintain and integrate cloud/web based software to handle both IT and enterprise workflows. Their innovation was not understood in the marketplace. After lacking success selling this developer’s workbench, they decided to use it to produce an application. They created an integrated ITIL (Information Technology Infrastructure Library) service management (ITSM) application to demonstrate their workbench capabilities. The market was hungry for software to handle IT related workflows. It took off, and they have since branched out to other workflows including HR, Legal and IT asset management (ITAM). This workbench is excellent for creating task based workflows by developing code snippets within their framework, all of which is stored as “configuration” style data in a database, including the database definition.

Overview

ServiceNow Ten years ago, I was skeptical whether "the cloud" would actually emerge as a viable solution. For starters, ask someone what "the cloud" is and you'll get different answers based on some broad perspective for how they view the use of the term. Ultimately to me, "the cloud" translates into whatever it takes to deliver a managed service by an outside service provider. The key here is on service. Today, that means minimally a server and storage, along with the related administrative service for upkeeping the backend (Infrastructure as a Service - IaaS). Taken a step further, there has to be a software service on the frontend, otherwise there is no use for the service. That software service can also be part of a cloud offering that provides administration, development and support (Software as a Service - SaaS), with the client optionally maintaining its configuration.

ServiceNow is an interesting offering in that it is a combination of IaaS and SaaS, whose infrastructure is dynamically redundant over 4 instances, with software running on top of their managed infrastructure supporting mainly service management functionality aimed at IT as well as other business service functions. ServiceNow is a current market leader in “the cloud” space providing an IaaS/SaaS solution to business.

ServiceNow began solely as a development framework. They innovatively created a foundational UI and utility set which provides a consistent method for reading a comprehensive set of configuration rules - database data definitions and data rules - plus predominantly javascript code for managing the data and functional service on both the client and server. It was originally brought to market as a simplified web development tool so that developers could focus on functionality rather than being distracted by recreating a utility UI. Not having much success selling it as a dev tool, they needed a use case in order to better market the system’s capabilities. The first implementation of ServiceNow was over ITIL support. It was this demonstration implementation that brought success to ServiceNow.

This section highlights work and thoughts related to application architecture and development over the ServiceNow framework. This development framework uses the Glide system, which has set a standard in robust utility and flexibility that other cloud service providers have had to play catch up with - a standard which translates into minimal time to implement new applications that fit the enterprise. I have found that I can even use this development platform as a communication tool in a requirements gathering meeting, stubbing out basic data and functional requirements as those requirements are being defined. Sweet!

5.1 - Administration

Topics for administrating the ServiceNow installation.

5.1.1 - Managing Base Code and Configuration

ServiceNow provides baseline ITIL functionality as their stock service offering to customers. Its functionality can be customized to suit your own requirements or be easily extended to provide for other service management functionality that the business needs to manage outside of IT. With the ability to highly customize baseline “out-of-box” functionality, a dilemma presents itself when performing a major version upgrade or applying ServiceNow mandated patches (i.e. upgrades with inclusive bug fixes and feature enhancements as a holistic installation package). There is no facility for managing deltas outside of the system when applying a major version upgrade or patch update. This article discusses the shortcomings and some lessons I have learned in maintaining client facing code in ServiceNow.

Overview

Having inherited an installation performed by SN service division installers/developers, sadly, I was faced with a heavily customized installation. They did perform the level of customization required by the client project team. They failed, however, to forewarn what the impact on maintenance would be. Upgrades are mandated on a quarterly basis. Though the SN installation developers were highly competent, it was equally apparent that they had not battled the ongoing mandated quarterly upgrades (aka patches). In a company I had worked for, on each upgrade/update effort, there were some 1200 exceptions generated out of the upgrade process. These exceptions related to ITIL related applications, with a few out of ITAM, that came “out-of-box”. After filtering out the exceptions that I didn’t care about, such as UI mods, workflows and email templates, there were still some 300+ exceptions that required manual review and remediation over code/configuration differences (e.g. Business Rules or Script Includes).

At the time of this writing, no utility is available that evaluates, at a high level, the impact of code changes to baseline. Note that these exceptions are only generated over baseline and only appear as part of performing an upgrade. Any custom extension that was created was not affected since there is no baseline equivalent to compare with. This showed me that creating custom apps was really a good option since baseline is not affected. I also saw that it could be better to consider creating custom ITIL based apps (e.g. Change) in order to better manage baseline and control functional enhancement where the business requirements were substantially different than the baseline supplied. The only issue I saw on custom apps was around parsing the DOM hive on a major version upgrade, resulting from having to work around a client-side bug that SN had yet to fix. SN seems to regularly change the DOM structure.

Lessons Learned

Here are lessons learned from my experience working with upgrades and managing the code base:

  • Maintain adherence to base as closely as possible, document deviations in the code itself, and capture the functional/structural deviations and justifications in an architectural style master repository/document. This way others will understand where to look for the skeletons. If your organization is large enough, possibly have a technical review and approval process over justifications on deviations, with additional approvals on major deviations.
  • If functionality of the baseline installation has to be extensively modified to cover your requirements, extend the tables, creating your own custom application. You can always revert back to baseline should SN incorporate your functionality as baseline in the future. (One simple example: in the Aspen release, Change only provided a single CI relationship to a Change Request. It was a customization to create a related table of CIs and modify the rulesets for the related table to replace the baseline single relationship. This functionality became baseline in about the Eureka release. It would have been better to extend the task table to provide the extensive deviations from baseline to cover the business requirements.)
  • Provide inline documentation as a developmental and administrative standard to explain the deviation and why. This has always seemed rather basic to me, but surprisingly this discipline is not practiced by modern developers.
  • Try to move the mod into a custom Script Include or Business Rule that overrides or is executed in concert with baseline. Having separated out custom code at upgrade time helped in cutting down exceptions in future upgrades/updates.
  • ServiceNow is not an “end-all” solution for every business need, even for service management. If the business process does not conform to the service management functional process flow, don’t try to make the underlying core functionality conform to a single use case. Likely, you need to develop it yourself using another development framework. I’ve seen the Service Catalog, a basic, generic service management task handler, be perverted in a way that limited the ability to upgrade and incorporate new generic functionality that the user community desired. The new functionality could not be incorporated since doing so was a structural modification that would have required all of the Service Catalog to be reworked.

See Recurring Maintenance as an example where it was beneficial to extend baseline and create my own custom app leveraging on the baseline code.

5.2 - Architecture and General Notes

ServiceNow architecture considerations and other considerations.

Introduction

I’ve worked with the ServiceNow system for about 3 years now. ServiceNow is a combination of a SaaS and IaaS system that provides out-of-box functionality for ITIL and general service management.

I’ve worked on development frameworks in older systems of the past. The Glide framework underneath the ServiceNow platform is truly revolutionary for bringing software development into the 21st century. It is well thought out where the developer can concentrate mostly on process rather than re-invent rudimentary code to support application aspects such as navigation and UI form presentation. This is all supplied as part of the core framework. The customer/developer facing elements are stored as configuration elements, including data definition. The customer/developer has no direct access to the backend database. All data definition is stored as meta data using the same semantics into the database as any application developed on top of the platform itself.

Database records are displayed in one of two formats: “list view”, where a number of records are viewed in a column/row format, and “UI view”, where the fields of a single record are displayed, possibly with a list of related records displayed in the parent record’s view.

Other backend features include a job scheduler, mail server, event handler, integrations using SOAP, REST, incoming email parser, and “runbook”.

5.2.1 - To "Service Catalog" or Not

The Service Catalog operates as a place where an end user can order something, whether it is physical or a request for service. In its basic form it is presented much like a product and handled as when you buy some product off of Amazon. Behind the “product” is a workflow attached to process its fulfillment. Service Catalog is a great tool, but not the answer to all situations. This article talks about what it does well and where it does not fit.

Introduction

In a nutshell, the Service Catalog is a framework that was originally built to simplify creating and presenting products (whether a service or a physical product) from a catalog and attach an execution workflow behind the individual product. The framework mimics the e-commerce “shopping cart” experience such as Amazon. At a company that I used to work for, I was thrown into the aftermath of the initial installation projects that included the Service Catalog to organize and clean up the hurried installation as well as standardize and manage the growth over the ServiceNow service management solution. I saw first hand how not to implement the Service Catalog. So when should you leverage the Service Catalog or roll your own by extending the Task table?

The Service Catalog Framework

The basic process flow of the Service Catalog is that the client selects one or more desired items, which are added to a “cart”. When the order is submitted, approvals are processed over the entire cart, a.k.a. the “request”. After this initial approval, the “requested items” begin their respective workflows, which handle the particulars of fulfilling each item ordered. (SN has since enhanced the functionality a bit with a “one step” submit process; the same overall workflow is still performed, the customer just doesn’t see the cart.)

This framework works well for physical products selectable from a catalog and placed in a shopping cart. Where the offering is an intangible service, it isn’t so straightforward. At the heart of a service request is a task with a workflow managing its fulfillment. Approvals may or may not be needed. A cart is probably not needed either, since the customer is only interested in a single service request being performed. From a development standpoint, the question is whether you extend the Task table for another application or create another service catalog item to collect and process the service request.

Another consideration from the Service Catalog perspective is whether a service request is intended to be part of a broader process such as new-employee provisioning. Reporting is another. Variable reporting was introduced in a more recent version, but as of the Fuji release, reporting on variables required open, broad read access to the variable-related tables. There is no granularity of read access for reporting: reporting users are either all enabled or none are. Depending on the privacy of the information stored in variables, you may not want to enable variable reporting.

So Do I Roll My Own or Use the Service Catalog?

From practical experience, reporting has traditionally driven whether I extend the Task table or simply configure another service catalog item. I was not comfortable opening access to the variables tables so that users with the basic “itil” role could report on variables.

Below are some lessons I learned from contending with a cleanup over the Service Catalog.

  • Do not modify the Service Catalog framework itself (namely the business rules) under the Request and Requested Items tables. REPEAT: DO NOT MODIFY THE SERVICE CATALOG FRAMEWORK ITSELF. Use it as it comes. Service Catalog has matured and seems to continue to evolve. Making mods to the SC framework will make it difficult to upgrade and enable new features as the baseline software is enhanced over time.
  • Standardize approval workflows. Consider making “template” workflows to get you started on a new workflow, or experiment with subflows to normalize approval workflows.
  • Maintain a separate product catalog appropriately for the type of product being offered (e.g. desktop workstations versus software packages versus service offerings). You can create your own product catalog that will help to better organize presentation of catalog items by category or type.
  • Avoid “streamlining” by creating a single multi-value variable over a table when the individual items each require their own fulfillment (e.g. the Configuration Item table for multiple software selections). Approvals then have to apply to all items selected rather than to each item individually, and from the workflow automation perspective it becomes more difficult to manage approvals tied to each selected item. Take a look here for some information on approval engines.
  • If the requirements of the catalog item don’t fit the basic shopping cart experience (i.e. a Request with one or more Requested Items, each with its own workflow process), consider extending the Task table specifically for that purpose.

One regret I had about using the Service Catalog was a scenario where an intermediate group took a Requested Item and further defined detailed requirements before passing the request on to one of a number of fulfillers, depending on another (related) catalog item, each requiring a different fulfillment workflow. I did manage to create a second Request/Requested Item out of the primary workflow of the first Requested Item and relate the two Requested Items into one Request, but it was a bit of a mess managing the two Requested Items in tandem because one was subordinate to the other even though they were peers. If I had it to do over again, I would have extended the Task table and rolled my own, possibly creating an independent Request/Requested Item for fulfillment and maintaining a related list on the custom task table extension.

Other Thoughts

One subject that needs mentioning is the Catalog Task. As of Fuji, Catalog Tasks are generic tasks subordinate to the Requested Item, initiated and maintained from the Requested Item’s workflow. Only a single workflow (or a set of workflows with execution conditions) can be defined generically for all Catalog Tasks, as with any other table (except the Catalog Item table), with no regard for any particular Catalog/Requested Item. This limits how you can develop around the Catalog Task to better manage fulfillment tasks between departments. I would like to see a future enhancement: a configuration table, similar to the Catalog Item table, that associates a named workflow to be automatically initiated, providing a specific workflow under a Catalog Task as it pertains to the related Requested Item. As it exists today, you’d have to add a custom attribute to the Catalog Task table for a “category” and condition all workflows on that category for execution, as sketched below. Since the Catalog Task is generic, more than a few conditional workflows would affect performance and become convoluted, since their maintenance spans all Catalog Tasks.
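
Here is a minimal sketch of that workaround. The u_category field, its value, and the use of the Catalog Task activity’s advanced script to stamp it are assumptions for illustration, not baseline behavior.

// In the Requested Item workflow, the Catalog Task activity's advanced script
// stamps a hypothetical u_category field on the generated sc_task record.
task.short_description = 'Facilities inspection for ' + current.number;
task.u_category = 'facilities_inspection';   // custom field added to Catalog Task [sc_task]

// A separate sc_task workflow would then carry a condition along the lines of:
//   current.u_category == 'facilities_inspection'
// so that only tasks stamped with that category pick up the specialized workflow.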

5.2.2 - Creating and Using Subflows in ServiceNow

Subflows are separate workflows defined to be executed as an activity within a primary workflow. ServiceNow’s documentation gives the basics on how to create a subflow but is light on how to link a subflow to a primary workflow and pass data and return codes back and forth. This article attempts to fill in the gaps you can’t get from the SN documentation.

The basics for subflows is documented in this ServiceNow Wiki article.

Notes to bear in mind when defining and using subflows:

  • Create the subflow with a name prefaced with “Subflow” in order to distinguish it from a regular workflow; otherwise you can’t tell from the list whether a workflow is a primary or a sub.
  • Subflows are available as workflow activities when creating a primary workflow.
  • Subflows must be created using the same table as the primary workflow.
  • Input fields:
    • The fields must be mapped into the record of the table the subflow is created on and must exist in the same context.
    • If a literal, no quotes needed.
  • The workflow scratchpad in the primary workflow is not shared with the subflow and vice versa.
    • Data is received as “input fields”.
    • Data is returned using “return codes”.
    • If scratchpad variable is used for receiving a “return code”, it must be initialized before being referenced in the Return Code activity.
  • Return codes:
    • Can return a string (or in the form of a Javascript $() style variable).
    • Literals do not need quoting.
  • Change considerations:
    • Subflows are dynamically executed. Tasks already in process will execute the current subflow when called unless the subflow is already in execution mode (i.e. already attached to the task).
    • Workflows are statically executed. Tasks already in process will execute the legacy workflow since it is already in progress.

Example: A simple catalog item is created with an associated workflow that calls a subflow to determine whether a manager’s name begins with the letter “A”. The return from the subflow would be either “yes” or “no”. The subflow could then be referenced from this catalog item and others as a standard validation; each catalog item keeps its own unique workflow but shares the same “A” validation on the manager’s name. A rough sketch of the pieces involved follows the note below.

The system property “glide.workflow.enable_input_variables” must be set to true in order to enable input variables in subflows.
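
For the manager-name example above, the moving parts might look something like the following. The variable name manager, the scratchpad names, and the use of Run Script activities are assumptions for illustration; the actual hand-off is wired up in the subflow’s Return Code activity and the primary workflow’s Subflow activity.

// Primary workflow - Run Script activity placed before the Subflow activity:
// initialize the scratchpad variable that will receive the subflow's return code.
workflow.scratchpad.manager_starts_with_a = '';

// Subflow (created on the same table as the primary workflow) - Run Script activity:
// compute the answer from the catalog variable and stash it for the Return Code activity.
var mgrName = current.variables.manager.getDisplayValue() + '';
workflow.scratchpad.answer = (mgrName.charAt(0).toUpperCase() == 'A') ? 'yes' : 'no';
// The subflow's Return Code activity then returns workflow.scratchpad.answer ('yes' or 'no'),
// which the primary workflow picks up after the Subflow activity completes.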

5.2.3 - Working with Dates in ServiceNow

One would think that the date/time data type would be cut and dried in terms of computing; after all, programming around date/time has been a feature since the early days of digital computer systems. ServiceNow has to provide backend utilities to perform this translation since the developer cannot reach into the web server backend. Though SN has a collection of date/time utilities, they are scant on formatting and calculation options. You have to roll your own to compensate, using the Glide library functions that SN makes available to the developer for translating a system- or database-stored time into a consistent, generic format. This article provides an example of how to create your own date/time utility using the Glide library date/time functions.

Introduction

Different operating systems handle date/time using different standards, and databases store date/time using yet other standards. Aside from storing and interpreting a consistent date/time, formatting the presentation can be onerous. Remember the Y2K dilemma? In terms of open systems, date/time has been a non-event all the way around: the value stored is the number of seconds past the “epoch” (i.e. January 1, 1970) in Coordinated Universal Time (i.e. Greenwich Mean Time) and converted according to the timezone configured on the host. Since this is expressed as a (signed 32-bit) integer, time runs out, so to speak, in January 2038. Databases have their own epoch/integer systems for storing date/time, so a translator is needed to present a date/time in various formats.
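
As a plain (non-ServiceNow) illustration of the epoch arithmetic described above:

// Seconds since the Unix epoch (January 1, 1970 00:00:00 UTC).
var secondsSinceEpoch = Math.floor(new Date().getTime() / 1000);

// A signed 32-bit counter of those seconds tops out at 2^31 - 1 = 2147483647,
// which corresponds to January 19, 2038 UTC - the "2038 problem" mentioned above.
var rolloverDate = new Date(2147483647 * 1000);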

Use Case

On one application that I had to build, only the date itself was needed. Defining the data dictionary was no big deal, since the field could be defined as “date only”. In calculating date differences in a Business Rule, though, I had to roll my own utilities for calculating and storing differences between two dates, since the Glide calls only serviced date/time. Below are some basic JavaScript functional objects for working with date/time and passing the result to the client via Ajax. Some are mine, some are from various sources in the SN community that I have collected. The functionality is self-documenting.

var AjaxUtils = Class.create();

AjaxUtils.prototype = Object.extendsObject(AbstractAjaxProcessor, {

        calcDateDiff : function(start_date, end_date) {
		// This method calculates on whole days.
		// The "start day" counts as 1 day.
		
		var createdate = new GlideDateTime();
		var enddate = new GlideDateTime();
		
		// The current day is used for the endDate calculation if not provided.
		if (typeof(start_date) == 'undefined') {
			if (this.getParameter('sysparm_start_date')) {
				start_date = this.getParameter('sysparm_start_date');
			} else {
				start_date = gs.now();
			}
		}
		if (typeof(end_date) == 'undefined') {
			if (this.getParameter('sysparm_end_date')) {
				end_date = this.getParameter('sysparm_end_date');
			} else {
				end_date = gs.now();
			}
		}
		
		createdate.setDisplayValueInternal(start_date + ' 00:00:00');
		enddate.setDisplayValueInternal(end_date + ' 00:00:00');
		
		var variance = parseInt(gs.dateDiff(createdate.getDisplayValue(), enddate.getDisplayValue()).replace(/ .*/, '') );
		variance += 1;
		
		
		return variance.toString();
		
	},
	
	checkEndDate : function(startDate, endDate) {
		if(typeof(startDate) == 'undefined') {
			//startDate = new Packages.com.glide.glideobject.GlideDateTime();
			//startDate.setDisplayValue(this.getParameter('sysparm_start_date'));
			startDate = this.getParameter('sysparm_start_date');
		}
		if(typeof(endDate) == 'undefined') {
			//endDate = new Packages.com.glide.glideobject.GlideDateTime();
			//endDate.setDisplayValue(this.getParameter('sysparm_end_date'));
			endDate = this.getParameter('sysparm_end_date');
		}
		//gs.log('here:\n' + startDate + '\n' + endDate + '\n' + gs.dateDiff(startDate, endDate, true));
		return(gs.dateDiff(startDate, endDate, true) >= 0);
	},
	
	returnLeadTimeInDays : function(startDate) {
		var startDateTmp = new GlideDate();
		startDateTmp.setDisplayValue(startDate);
		
		var msStart = startDateTmp.getNumericValue();
		var msNow = new GlideDate().getNumericValue();
		var msDiff = msStart - msNow;
		var dayDiff = Math.floor(msDiff / 86400000);
		
		return(dayDiff);
	},
	
	
	checkLeadTime : function(startDate, leadTime) {
		startDate = new GlideDateTime();
		startDate.setDisplayValue(this.getParameter('sysparm_start_date'));
		leadTime = parseInt(this.getParameter('sysparm_lead_time'));
		
		var msStart = startDate.getNumericValue();
		var msNow = new GlideDateTime().getNumericValue();
		var msDiff = msStart - msNow;
		var dayDiff = Math.floor(msDiff / 86400000);
		
		return(dayDiff >= leadTime);
	},
	
	dateInFuture : function() {
		//Returns true if date is before now, and false if it is after now.
		var firstDT = this.getParameter('sysparm_fdt'); //Date-Time Field
		var diff = gs.dateDiff(firstDT, gs.nowDateTime(), true);
		var answer = '';
		var new_diff = Math.floor(diff / 86400);
		if (new_diff >= 0){
			answer = 'true';
		} else {
			answer = 'false';
		}
		return answer;
	},
	
	dateInPast : function(firstDT) {
		//Returns true if date is before today, and false if it is today or later.
		if (typeof(firstDT) == 'undefined') {
			firstDT = this.getParameter('sysparm_fdt'); //Date-Time Field
		}
		var diff = gs.dateDiff(firstDT, gs.nowDateTime(), true);
		var answer = '';
		var new_diff = Math.floor(diff / 86400);
		if (new_diff > 0){
			answer = 'true';
		} else {
			answer = 'false';
		}
		return answer;
	},
	
	date_timeInPast : function(firstDT) {
		//Returns true if date is before today, and false if it is today or later.
		if (typeof(firstDT) == 'undefined') {
			firstDT = this.getParameter('sysparm_fdt'); //Date-Time Field
		}
		var diff = gs.dateDiff(firstDT, gs.nowDateTime(), true);
		var answer = '';
		var new_diff = Math.floor(diff / 60);
		if (new_diff > 0){
			answer = 'true';
		} else {
			answer = 'false';
		}
		return answer;
	},

	type: "AjaxUtils"
});
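
As a usage sketch, a client script could call one of these methods through GlideAjax (the script include must be marked client callable; the form field names here are hypothetical):

// Client-side call into AjaxUtils.calcDateDiff (hypothetical g_form field names).
// Assumes the form passes dates in the internal yyyy-MM-dd format that calcDateDiff expects.
var ga = new GlideAjax('AjaxUtils');
ga.addParam('sysparm_name', 'calcDateDiff');
ga.addParam('sysparm_start_date', g_form.getValue('start_date'));
ga.addParam('sysparm_end_date', g_form.getValue('end_date'));
ga.getXMLAnswer(function(answer) {
    // answer is the whole-day difference returned as a string by calcDateDiff()
    g_form.setValue('duration_days', answer);
});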

5.3 - Development Projects

5.3.1 - Recurring Maintenance

Articles here are related to recurring maintenance activities inside ServiceNow.

Introduction

I prefer to enable others appropriately so they can get their jobs done. The last thing I want is to be the bottleneck keeping anyone from getting their job done. That is the reason I’m passionate about IT, right?

The company I was working for was on the “Dublin” release when, within a short amount of time, three groups wanted to be able to set up maintenance schedules for performing various types of routine inspections. I looked at the Planned Maintenance application to see whether it could be used cooperatively by different user groups and whether it had the capacity to partition and maintain each group’s schedule by location. In Dublin, you were constrained to targeting configuration items (CIs), and it serviced only a single user group - i.e. IT operations.

Being fascinated with how the job scheduler was constructed, I set out to see how I could create extensions off of the job scheduler for purposes other than general ServiceNow operations and be able to delegate schedule administration. I used this use case and the existing monolithic scheduler as a base to extend, providing the needed delegation to operational admins. The details of how I did this are shown below in the “Presentation to Houston SNUG”.

The references below are the articles I used and a presentation I made to share how we used the scheduler in what we called “IT Facilities”:

5.4 - Interesting Techniques by Others

Here are some articles I found that present innovative solutions developed on ServiceNow.

5.4.1 - Dynamically Adding Variables to a Request Item

Posted on November 21, 2014 by Bill Mitlehner

http://www.snrevealed.com/2014/11/21/dynamically-adding-variables-to-a-request-item/

Whatever your reasons may be, there may come a time when you need to dynamically add an additional variable to a request item after the original request has been entered by the user. This could be for populating additional information that is not available at the time the request is made or if you are breaking up a single request that contains a list into multiple individual requests that can be operated on individually.

The challenge with doing this is that there is a relationship between four different tables that needs to be set up to ensure that the variable is associated with the request properly. At the top is the Request Item table itself (sc_req_item); ultimately, this is where the variables will be referenced on any subsequent form displays. The other three tables create the relationship between the request item, the questions, and the answers. These other tables are sc_item_option_mtom, item_option_new and sc_item_option.

Working backwards, we ultimately need to build up the relationship between the request item and the answers associated with that particular request item. Capturing this relationship is the responsibility of the sc_item_option_mtom table, a many-to-many table that has an entry relating every single answer (sc_item_option) to its respective request item (sc_req_item). (You can review many-to-many relationships in the ServiceNow wiki.) In addition, each entry in the Answer table (sc_item_option) has a reference to the particular question it answers (item_option_new).

To formalize this, I’ve put together a bit of code that takes a request item, a variable name, a variable value, and (optionally) a catalog item, and associates a question and answer with a particular request item.

function addOptions(reqItemID, varName, varValue, catItem) {
    // Get a reference to the request item table using the sys_id of the request item
    var gr_item = new GlideRecord('sc_req_item');
    gr_item.addQuery('sys_id', reqItemID);
    gr_item.query();

    // Assuming we found a matching request...
    if (gr_item.next()) {
        // Find the correct question
        var gr_options = new GlideRecord('item_option_new');
        gr_options.addQuery('name', varName);
        // If the question is associated with a catalog item, keep that relationship
        if (catItem != '') gr_options.addQuery('cat_item', catItem);
        gr_options.query();
        // If we found a matching question...
        if (gr_options.next()) {
            // Get a reference to the answers table and insert a new answer
            var gr_answers = new GlideRecord('sc_item_option');
            gr_answers.initialize();
            gr_answers.item_option_new = gr_options.sys_id;  // Map the answer to its question
            gr_answers.value = varValue;
            gr_answers.insert();  // Insert the record
            // Now build the relationship between the answer and the request item
            var gr_m2m = new GlideRecord('sc_item_option_mtom');
            gr_m2m.initialize();
            gr_m2m.sc_item_option = gr_answers.sys_id;
            gr_m2m.request_item = reqItemID;
            gr_m2m.insert();  // Create the new relationship
        }
    }
}
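
A hedged usage sketch (the variable name, value, and context are placeholders): from a server-side script that already has the Requested Item record in hand, such as a workflow Run Script where current is the sc_req_item, and assuming the function above is available in that scope (e.g. as an on-demand script include), the call might look like this.

// Attach a hypothetical 'cost_center' question/answer to an existing request item,
// scoped to its catalog item so the correct question definition is matched.
addOptions(current.sys_id, 'cost_center', 'CC-1234', current.cat_item.toString());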

5.4.2 - Harnessing the Power of Dynamic Filters in ServiceNow

Author: Mark Stanger September 4th, 2015

https://servicenowguru.com/system-definition/harnessing-power-dynamic-filters-servicenow/

ServiceNow adds a ton of great functionality to each new product release. Oftentimes, the most helpful and useful features (at least to a long-time user of the system) are enhancements that simplify or improve existing functionality. Unfortunately, these are often some of the most under-appreciated and end up getting lost in the marketing hype of all of the brand new capabilities that you may or may not use. One such example that recently arrived in ServiceNow is ‘Dynamic filters’. In this post, I’ll share what dynamic filters are, and show how you can extend and leverage this capability to improve your own ServiceNow system.

Dynamically Filtered List

The ServiceNow wiki does a decent job of explaining the basic concept and usage of dynamic filters. There are thousands of scenarios where users of the system need to query for information, display a list of data, generate a report, or return values from a reference field based on specific, logical filter criteria. There are many examples of this…my group’s work, my approvals, records assigned to other members of my groups, etc. These types of scenarios, though simple on the surface, sometimes require very complex code to query for and return the correct data. ServiceNow has always allowed you to do this, of course, but the approach (asking a typical end user to remember and correctly populate a specific query string with a function call directly in the filter) really isn’t one that works well — even if that user happens to be an admin of the system. Those query strings generally look something like this and can be pasted directly into any filter criteria to return information…

javascript:gs.getUserID();
javascript:getRoledUsers();
etc...

The general idea behind dynamic filters is to allow a ServiceNow administrator to pre-define the filter query logic to return the correct information from a back-end function, then set up a dynamic filter definition to point to that information via a simple label invoked from any filter field criteria in the system. These dynamic filters can be as flexible and complex as you need them to be, but the end-user doesn’t need to know or understand any of that complexity in order to benefit from them!

There are several of these dynamic filters defined out-of-box that you can use right away as examples for creating your own. You can find them under ‘System Definition -> Dynamic Filter Options’ in your left nav. For more complex scenarios, you’ll actually point your dynamic filter to a back-end Script Include function that contains all of the logic and does the heavy lifting.

One common filter criterion that I hear about all of the time and that isn’t handled out-of-box is filtering for records associated with members of my groups via some user field (usually an assignment or ownership of some sort): tickets assigned to members of my groups, outstanding approvals for members of my groups, etc. This configuration can be added to your system by following a few simple steps as shown below…

  1. Create a new ‘On Demand’ Script Include function. I’ve written about this capability before so you can reference that article for more details if you need. Creating this script include will allow us to easily call a reusable function to return the data we want…in this case a list of users that belong to the same group as the current user. The basic idea for this function is to get a user’s groups, then find the active group members sys_id values associated with those groups and add them to an array to be returned. You can navigate to ‘System Definition -> Script Includes’ in your left nav to create this. Don’t forget that the ‘Name’ value of any on demand script include (like this one) needs to exactly match the name of the function you’re calling in the script!

‘getMyGroupMembers’ Script Include

Name: getMyGroupMembers Active: True Client callable: True Description: Queries for members of groups that the currently logged-in user is also a member of. Script:

function getMyGroupMembers(){
    var myGroups = gs.getUser().getMyGroups();
    var groupsArray = new Array();
    var it = myGroups.iterator();
    var i=0;
    var groupMemberArray = new Array();
    while(it.hasNext()){
        var myGroup = it.next();
        //Query for group members
        var grMem = new GlideRecord('sys_user_grmember');
        grMem.addQuery('group', myGroup);
        //Only return active users
        grMem.addQuery('user.active', true);
        grMem.query();
        while(grMem.next()){
            //Add to user sys_id to array
            groupMemberArray.push(grMem.user.toString());
        }
        i++;
    }
    return groupMemberArray;
}
  2. Create a new ‘Dynamic Filter’ record

The on-demand function is great and allows you to easily return the data you want from any place you can call scripts in the system. This is fantastic for business rules, workflow scripts, etc., but the average user running a report or filtering a list is not going to know (nor should they need to know) the exact syntax and function name to call. This is where dynamic filters come in! We can wrap that script call in a friendly label that displays in any filter anywhere in the system so that a normal human being can access it as well. Your dynamic filter for the script above should look just like I’ve shown in the screenshot below. You can create it by navigating to ‘System Definition -> Dynamic Filter Options’.

Dynamic Filter

NOTE: One interesting bit of information I discovered while working with dynamic filters is the way that the system handles the encoded query strings for them. You end up with a query string (that you could reuse) that looks like this…

assigned_toDYNAMIC1a570fd90856c200aa4521695cf1eb24

The ‘DYNAMIC’ keyword indicates the use of a dynamic filter, and what follows is the sys_id of the corresponding dynamic filter record.

The end result is a nice, dynamic filter option for filtering where the user listed in a user field is a member of one of your groups! This is just one example of a fantastic capability in ServiceNow. There are lots of other use cases that you can add using this same approach.

Dynamically Filtered List
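
One further note from my own tinkering: the encoded string shown above should also be reusable in server-side queries, assuming the DYNAMIC keyword is honored by GlideRecord encoded queries the same way it is in list filters (worth verifying on your own instance; the sys_id below is specific to the author’s example).

// Sketch: reuse the dynamic-filter encoded query against the Incident table.
// The sys_id following 'DYNAMIC' must point at a Dynamic Filter Option record in your instance.
var gr = new GlideRecord('incident');
gr.addEncodedQuery('active=true^assigned_toDYNAMIC1a570fd90856c200aa4521695cf1eb24');
gr.query();
gs.log('Open incidents assigned to members of my groups: ' + gr.getRowCount());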

5.4.3 - Changing the Filter of a List Collector Variable via Client Script

Author: Mark Stanger January 13th, 2010

https://servicenowguru.com/scripting/client-scripts-scripting/changing-filter-list-collector-variable-client-script/

If you’ve worked with the Service-now.com service catalog much, you’ve probably realized that there are some differences between the service catalog interface and the traditional forms that are used throughout the rest of the tool. The intention of this is to make the experience a little bit better for end users of the system but it also means that you, as an administrator, have to learn a few new tricks to deal with those differences.

One of these differences is the List collector variable. It allows the user to select multiple items from a list of items and optionally filter those items to help in their selection. One of the most common requests I see about this type of variable deals with how to handle the filter at the top of the list collector. Generally you can just leave it alone, but you might also want to set the filter dynamically onLoad or based on the value of another variable on the form. Depending on the situation and the number of items in the list collector table you may want to remove the filter portion completely.

List Collector Filtered

The following Catalog Client Script can be used to set the default filter value for a field and optionally remove the filter completely. It assumes that your List collector variable is named ‘configuration_items’. By default it sets a filter where ‘name != null’ and ‘sys_class_name (CI type)’ is anything. Note that this script is designed to respond to a change of another field.

Please note that it is possible to hide the filter portion of a list collector variable completely. This can be accomplished by adding the ‘no_filter’ attribute to the ‘Attributes’ field on the variable form. The client script method may still be useful if you want to show/hide the filter conditionally however. This also works for Service Portal! Just make sure you set the ‘UI type’ field on the client script form to ‘Both’.

function onChange(control, oldValue, newValue, isLoading) {
    //Apply a filter to the list collector variable
    var collectorName = 'configuration_items';
    var filterString = 'name!=NULL^sys_class_nameANYTHING';
   
    //Try Service Portal method
    try{
        var myListCollector = g_list.get(collectorName);
        myListCollector.reset();
        myListCollector.setQuery(filterString);
    }
    //Revert to Service Catalog method
    catch(e){
        //Find and hide the filter header elements (optional)
        //Simple method for items with only one list collector
        //$('ep').select('.row')[0].hide();
        //Advanced method for items with more than one list collector (more prone to upgrade failure)
        //var el = $('container_' + g_form.getControl(collectorName).id).select('div.row')[0].hide();
       
        //Reset the filter query
        window[collectorName + 'g_filter'].reset();
        window[collectorName + 'g_filter'].setQuery(filterString);
        window[collectorName + 'acRequest'](null);
    }
}

Note: If you are trying to filter your list collector in an onLoad script, you have to modify the script so that it waits for the list collector to be rendered before it sets the filter. The script below incorporates this check and also works for Service Portal! Just make sure you set the ‘UI type’ field on the client script form to ‘Both’.

function onLoad() {
    //Apply a default filter to the list collector variable
    var collectorName = 'configuration_items';
    var filterString = 'name!=NULL^sys_class_nameANYTHING';
   
    //Try Service Portal method
    try{
        var myListCollector = g_list.get(collectorName);
        myListCollector.reset();
        myListCollector.setQuery(filterString);
    }
    //Revert to Service Catalog method
    catch(e){
        //Hide the list collector until we've set the filter
        g_form.setDisplay(collectorName, false);
        setCollectorFilter();
    }
   
    function setCollectorFilter(){
        //Test if the g_filter property is defined on our list collector.
        //If it hasn't rendered yet, wait 100ms and try again.
        if(typeof(window[collectorName + 'g_filter']) == 'undefined'){
            setTimeout(setCollectorFilter, 100);
            return;
        }
        //Find and hide the filter elements (optional)
        //Simple method for items with only one list collector
        //$('ep').select('.row')[0].hide();
        //Advanced method for items with more than one list collector (more prone to upgrade failure)
        //var el = $('container_' + g_form.getControl(collectorName).id).select('div.row')[0].hide();
       
        //Reset the filter query
        window[collectorName + 'g_filter'].reset();
        window[collectorName + 'g_filter'].setQuery(filterString);
        window[collectorName + 'acRequest'](null);
        //Redisplay the list collector variable
        g_form.setDisplay(collectorName, true);
    }
}