sharepoint server – Error when trying to activate the Content Organizer feature on a site

I just bumped into this in Office 365.

There is no need to force-activate any hidden features.

The Content Organizer feature fails to activate due to dependencies with other features.

Ensure the following features are activated first; the Content Organizer feature should then activate without issues:

  • SharePoint Server Standard Site Collection features

  • SharePoint Server Enterprise Site Collection features

dnd 5e – Does the Order of Scribes Feature Awakened Spellbook also change the damage type of Absorb Elements?

No, you’d still need to take the incoming hit from the specified damage types.

As the feature says, you can temporarily replace the spell’s damage type with another, thereby altering the type of damage you can deal out while the spell is in effect.

But you can’t change the trigger of the spell. You’d still need to take acid, cold, fire, lightning, or thunder damage, because if you didn’t, you couldn’t cast the spell in the first place.

So, if a wizard hits you with Fire Bolt, you can cast Absorb Elements in reaction to that, change the damage type to poison, gain resistance to poison, and deal an extra 1d6 poison damage on your next turn. You lose the resistance to the incoming attack, though, as it is still fire.

Objection raised: The wording says “you have resistance to the triggering damage type”.

Good point, but then you would have to say the extra damage is of the triggering type too, as that is what the wording says, and therefore there would be nothing to be gained at all. But as the ability in question lets you change the damage type, I think changing both the resistance and the outgoing damage is allowed by the ability. The trigger itself has to stay the same as it was before the spell was cast.

dnd 5e – Who can use a Spell Scroll scribed by a bard who learned the spell using the Magical Secrets feature, if it is not normally on the bard spell list?

When a bard learns a spell using the Magical Secrets feature, it counts as a bard spell. Of note, should they replace any spell learned via Magical Secrets at a later level, they can only replace it with one from the bard spell list. Rules designer Jeremy Crawford actually did make a ruling on Twitter regarding this, so this is clear: replacing one’s Magical Secrets spell is easily done – it is just somewhat unwise to do so.

Say a bard picks find familiar as a spell. Massive boon! For dirt cheap, they can scribe it onto spell scrolls. Now, who can use that spell scroll?

I see three possibilities:

  1. This is a wizard spell on the wizard list – designed for wizards. Clearly only a wizard can use a wizard spell on the wizard list designed specifically for wizarding ways, right? So obvious: a wizard buying this spell can transcribe it into their book (with a good roll on a good day) – or simply use it and get themselves a familiar.

  2. Any Magical Secrets spell, no matter which spell list it once came from, counts as a bard spell for all intents and purposes (i.e. “learning, casting and recording”). Should a bard make such a spell scroll, any other bard can use it. A “secret” no longer! But to be clear, if it is a bard spell, only bards could use this magic item. A wizard would not ever figure it out. A druid would have no chance. A barbarian would accidentally use this scroll as a fire-starter.

  3. You can use any scroll if the spell is on your class list. So almost anyone can use that charm person scroll. Thus, the bard scribing such a spell scroll cannot even use it themselves. Imagine the ignominy of scribing any spell from Magical Secrets: “I cannot read any of what I just wrote down.”

Which interpretation is correct?

(I did search to check whether this was a repeat of a previous question before posting.)

dnd 5e – Can the Way of Mercy monk’s Flurry of Healing and Harm feature be used on one target multiple times in the same turn?

There seems to be some confusion here. I’ll try to unpack this.

Using Hand of Healing as an action

Hand of Healing says:

As an action, you can spend 1 ki point to touch a creature and restore a number of hit points equal to a roll of your Martial Arts die + your Wisdom modifier.

If you use Hand of Healing as an action, you cannot use Flurry of Blows, because Flurry of Blows says:

Immediately after you take the Attack action on your turn, you can spend 1 ki point to make two unarmed strikes as a bonus action.

Hand of Healing is not the Attack action. Using Hand of Healing in this way allows for a single use of the healing.

Using Flurry of Blows

As mentioned previously, using Flurry of Blows as a bonus action requires that we first use our action to take the Attack action. When we do so and use Flurry of Blows, Flurry of Healing and Harm says:

When you use Flurry of Blows, you can now replace each of the unarmed strikes with a use of your Hand of Healing, without spending ki points for the healing.

This gives us two instances of the healing.

We can use both instances on one creature.

There is no restriction on using the two instances of healing on the same creature. The important restriction to remember is that the affected creature must be within your reach. Hand of Healing requires that you touch the creature you are healing.

python – How can I increase the number of time-series features when building an Encoder-Decoder (seq2seq) model?

  • TensorFlow version: 2.1.0
  • Python version: 3.7.4

I was able to build an Encoder-Decoder model with one time-series feature as input, predicting one time series, as below:

# num_filters, kernel_size and dropout_rate are hyperparameters assumed to be defined elsewhere (not shown in the question).
def build_model(input_timesteps, output_timesteps, num_links):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.BatchNormalization(name = 'batch_norm_0', input_shape = (input_timesteps, num_links, 1, 1)))
    
    #Encoder
    model.add(tf.keras.layers.ConvLSTM2D(name ='conv_lstm_1',
                         filters = num_filters, kernel_size = (kernel_size[0], 1),
                         padding = 'same', 
                         return_sequences = True))
    
    model.add(tf.keras.layers.Dropout(dropout_rate, name = 'dropout_1'))
    model.add(tf.keras.layers.BatchNormalization(name = 'batch_norm_1'))

    model.add(tf.keras.layers.ConvLSTM2D(name ='conv_lstm_2',
                         filters = num_filters, kernel_size = (kernel_size[1], 1),
                         padding='same',
                         return_sequences = False))
    
    model.add(tf.keras.layers.Dropout(dropout_rate, name = 'dropout_2'))
    model.add(tf.keras.layers.BatchNormalization(name = 'batch_norm_2'))
    
    model.add(tf.keras.layers.Flatten())
    
    #Decoder
    model.add(tf.keras.layers.RepeatVector(output_timesteps))
    model.add(tf.keras.layers.Reshape((output_timesteps, num_links, 1, 64)))
    
    model.add(tf.keras.layers.ConvLSTM2D(name ='conv_lstm_3',
                         filters = num_filters, kernel_size = (kernel_size[0], 1),
                         padding='same',
                         return_sequences = True))
    
    model.add(tf.keras.layers.Dropout(dropout_rate, name = 'dropout_3'))
    model.add(tf.keras.layers.BatchNormalization(name = 'batch_norm_3'))
    
    model.add(tf.keras.layers.ConvLSTM2D(name ='conv_lstm_4',
                         filters = num_filters, kernel_size = (kernel_size[1], 1),
                         padding='same',
                         return_sequences = True))
    
    model.add(tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(units=1, name = 'dense_1', activation = 'relu')))
    #model.add(Dense(units=1, name = 'dense_2'))

    optimizer = tf.keras.optimizers.RMSprop() #lr=0.0001, rho=0.9, epsilon=1e-08, decay=0.9)
    model.compile(loss = "mse", optimizer = optimizer)
    return model


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
batch_norm_0 (BatchNormaliza (None, 8, 4, 1, 1)        4         
_________________________________________________________________
conv_lstm_1 (ConvLSTM2D)     (None, 8, 4, 1, 64)       166656    
_________________________________________________________________
dropout_1 (Dropout)          (None, 8, 4, 1, 64)       0         
_________________________________________________________________
batch_norm_1 (BatchNormaliza (None, 8, 4, 1, 64)       256       
_________________________________________________________________
conv_lstm_2 (ConvLSTM2D)     (None, 4, 1, 64)          164096    
_________________________________________________________________
dropout_2 (Dropout)          (None, 4, 1, 64)          0         
_________________________________________________________________
batch_norm_2 (BatchNormaliza (None, 4, 1, 64)          256       
_________________________________________________________________
flatten_2 (Flatten)          (None, 256)               0         
_________________________________________________________________
repeat_vector_2 (RepeatVecto (None, 3, 256)            0         
_________________________________________________________________
reshape_2 (Reshape)          (None, 3, 4, 1, 64)       0         
_________________________________________________________________
conv_lstm_3 (ConvLSTM2D)     (None, 3, 4, 1, 64)       327936    
_________________________________________________________________
dropout_3 (Dropout)          (None, 3, 4, 1, 64)       0         
_________________________________________________________________
batch_norm_3 (BatchNormaliza (None, 3, 4, 1, 64)       256       
_________________________________________________________________
conv_lstm_4 (ConvLSTM2D)     (None, 3, 4, 1, 64)       164096    
_________________________________________________________________
time_distributed_2 (TimeDist (None, 3, 4, 1, 1)        65        
=================================================================

About the train and test data shapes: each sample has 8 time steps of one time-series feature of length 4, and the model predicts the next 3 time steps.

- X_train shape : (198, 8, 4, 1, 1)    X_test shape : (150, 8, 4, 1, 1)
- Y_train shape : (198, 3, 4, 1, 1)    Y_test shape : (150, 3, 4, 1, 1)

I want to increase the number of time-series features (each of length 4), as below.

- multi_X_train shape : (198, 8, 2, 4, 1)   multi_X_test shape : (150, 8, 2, 4, 1)
- Y_train shape : (198, 3, 4, 1, 1)    Y_test shape : (150, 3, 4, 1, 1)

# num_filters, kernel_size and dropout_rate are again assumed to be defined elsewhere.
def build_multi_model(input_timesteps, output_timesteps, num_links):
    model = tf.keras.Sequential()
#     model.add(tf.keras.layers.BatchNormalization(name = 'batch_norm_0', input_shape = (input_timesteps, num_links, 2, 1)))
    model.add(tf.keras.layers.BatchNormalization(name = 'batch_norm_0', input_shape = (input_timesteps, 2, 4 ,1)))
    
    #Encoder
    model.add(tf.keras.layers.ConvLSTM2D(name ='conv_lstm_1',
                         filters = num_filters, kernel_size = (kernel_size[0], 2),
                         padding = 'same', 
                         return_sequences = True))
    
    model.add(tf.keras.layers.Dropout(dropout_rate, name = 'dropout_1'))
    model.add(tf.keras.layers.BatchNormalization(name = 'batch_norm_1'))

    model.add(tf.keras.layers.ConvLSTM2D(name ='conv_lstm_2',
                         filters = num_filters, kernel_size = (kernel_size[1], 2),
                         padding='same',
                         return_sequences = False))
    
    model.add(tf.keras.layers.Dropout(dropout_rate, name = 'dropout_2'))
    model.add(tf.keras.layers.BatchNormalization(name = 'batch_norm_2'))
    
    model.add(tf.keras.layers.Flatten())
    
    #Decoder
    model.add(tf.keras.layers.RepeatVector(output_timesteps))
    model.add(tf.keras.layers.Reshape((output_timesteps, num_links, 1, 64)))
    
    model.add(tf.keras.layers.ConvLSTM2D(name ='conv_lstm_3',
                         filters = num_filters, kernel_size = (kernel_size[0], 2),
                         padding='same',
                         return_sequences = True))
    
    model.add(tf.keras.layers.Dropout(dropout_rate, name = 'dropout_3'))
    model.add(tf.keras.layers.BatchNormalization(name = 'batch_norm_3'))
    
    model.add(tf.keras.layers.ConvLSTM2D(name ='conv_lstm_4',
                         filters = num_filters, kernel_size = (kernel_size[1], 2),
                         padding='same',
                         return_sequences = True))
    
    model.add(tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(units=1, name = 'dense_1', activation = 'relu')))
    #model.add(Dense(units=1, name = 'dense_2'))

    optimizer = tf.keras.optimizers.RMSprop() #lr=0.0001, rho=0.9, epsilon=1e-08, decay=0.9)
    model.compile(loss = "mse", optimizer = optimizer)
    return model

Model: "sequential_23"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
batch_norm_0 (BatchNormaliza (None, 8, 2, 4, 1)        4         
_________________________________________________________________
conv_lstm_1 (ConvLSTM2D)     (None, 8, 2, 4, 64)       333056    
_________________________________________________________________
dropout_1 (Dropout)          (None, 8, 2, 4, 64)       0         
_________________________________________________________________
batch_norm_1 (BatchNormaliza (None, 8, 2, 4, 64)       256       
_________________________________________________________________
conv_lstm_2 (ConvLSTM2D)     (None, 2, 4, 64)          327936    
_________________________________________________________________
dropout_2 (Dropout)          (None, 2, 4, 64)          0         
_________________________________________________________________
batch_norm_2 (BatchNormaliza (None, 2, 4, 64)          256       
_________________________________________________________________
flatten_22 (Flatten)         (None, 512)               0         
_________________________________________________________________
repeat_vector_22 (RepeatVect (None, 3, 512)            0         
_________________________________________________________________
reshape_22 (Reshape)         (None, 3, 4, 1, 64)       0         
_________________________________________________________________
conv_lstm_3 (ConvLSTM2D)     (None, 3, 4, 1, 64)       655616    
_________________________________________________________________
dropout_3 (Dropout)          (None, 3, 4, 1, 64)       0         
_________________________________________________________________
batch_norm_3 (BatchNormaliza (None, 3, 4, 1, 64)       256       
_________________________________________________________________
conv_lstm_4 (ConvLSTM2D)     (None, 3, 4, 1, 64)       327936    
_________________________________________________________________
time_distributed_22 (TimeDis (None, 3, 4, 1, 1)        65        
=================================================================

I couldn’t understand why the error below occurred when I tried to fit the model.

model = build_multi_model(8, 3, 4)
history = model.fit(multi_X_train, Y_train,
                    batch_size = batch_size, epochs = epoch,
                    shuffle = False, validation_data = (multi_X_test, Y_test),
                    verbose = 2, callbacks = (call_back))
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-559-7b082bf34abc> in <module>
      6                     batch_size = batch_size, epochs = epoch,
      7                     shuffle = False, validation_data = (multi_X_test, Y_test),
----> 8                     verbose = 2, callbacks = (call_back))
      9 
     10 print("early_stopping")

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    817         max_queue_size=max_queue_size,
    818         workers=workers,
--> 819         use_multiprocessing=use_multiprocessing)
    820 
    821   def evaluate(self,

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    340                 mode=ModeKeys.TRAIN,
    341                 training_context=training_context,
--> 342                 total_epochs=epochs)
    343             cbks.make_logs(model, epoch_logs, training_result, ModeKeys.TRAIN)
    344 

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
    126         step=step, mode=mode, size=current_batch_size) as batch_logs:
    127       try:
--> 128         batch_outs = execution_function(iterator)
    129       except (StopIteration, errors.OutOfRangeError):
    130         # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in execution_function(input_fn)
     96     # `numpy` translates Tensors to values in Eager mode.
     97     return nest.map_structure(_non_none_constant_value,
---> 98                               distributed_function(input_fn))
     99 
    100   return execution_function

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
    566         xla_context.Exit()
    567     else:
--> 568       result = self._call(*args, **kwds)
    569 
    570     if tracing_count == self._get_tracing_count():

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
    630         # Lifting succeeded, so variables are initialized and we can run the
    631         # stateless function.
--> 632         return self._stateless_fn(*args, **kwds)
    633     else:
    634       canon_args, canon_kwds = 

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
   2361     with self._lock:
   2362       graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 2363     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   2364 
   2365   @property

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/function.py in _filtered_call(self, args, kwargs)
   1609          if isinstance(t, (ops.Tensor,
   1610                            resource_variable_ops.BaseResourceVariable))),
-> 1611         self.captured_inputs)
   1612 
   1613   def _call_flat(self, args, captured_inputs, cancellation_manager=None):

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1690       # No tape is watching; skip to running the function.
   1691       return self._build_call_outputs(self._inference_function.call(
-> 1692           ctx, args, cancellation_manager=cancellation_manager))
   1693     forward_backward = self._select_forward_and_backward_functions(
   1694         args,

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/function.py in call(self, ctx, args, cancellation_manager)
    543               inputs=args,
    544               attrs=("executor_type", executor_type, "config_proto", config),
--> 545               ctx=ctx)
    546         else:
    547           outputs = execute.execute_with_cancellation(

/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     65     else:
     66       message = e.message
---> 67     six.raise_from(core._status_to_exception(e.code, message), None)
     68   except TypeError as e:
     69     keras_symbolic_tensors = (

~/.local/lib/python3.7/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError:  Input to reshape is a tensor with 98304 values, but the requested shape has 49152
	 [[node sequential_23/reshape_22/Reshape (defined at <ipython-input-559-7b082bf34abc>:8) ]] [Op:__inference_distributed_function_126239]

Function call stack:
distributed_function
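The shape arithmetic explains the error. In the multi-feature model, conv_lstm_2 outputs (2, 4, 64), so Flatten yields 512 units (the summary confirms flatten_22 → (None, 512)), but the decoder’s Reshape still targets (output_timesteps, num_links, 1, 64) = (3, 4, 1, 64), copied from the single-feature model: 768 values per sample versus the 1536 that RepeatVector emits. With a batch size of 64 (an assumption inferred from the error numbers, not stated in the question), that is exactly 98304 vs 49152. A minimal sketch of the bookkeeping, with a Reshape target that conserves the element count:

```python
import numpy as np

# Shape bookkeeping for the failing Reshape layer. Batch size 64 is an
# assumption inferred from the 98304/49152 numbers in the error message.
batch = 64
output_timesteps = 3
flatten_units = 2 * 4 * 64            # conv_lstm_2 output (2, 4, 64), flattened -> 512

values_emitted = batch * output_timesteps * flatten_units     # after RepeatVector
values_requested = batch * output_timesteps * 4 * 1 * 64      # Reshape((3, 4, 1, 64))
print(values_emitted, values_requested)   # 98304 49152 -- the two numbers in the error

# A target shape whose product matches flatten_units reshapes cleanly,
# e.g. (output_timesteps, 2, 4, 64), since 2 * 4 * 64 == 512:
x = np.zeros((batch, output_timesteps, flatten_units))
y = x.reshape((batch, output_timesteps, 2, 4, 64))
print(y.shape)                            # (64, 3, 2, 4, 64)
```

Note that after this fix the decoder carries a (2, 4) spatial grid, so the later layers and the targets (Y is (198, 3, 4, 1, 1)) need to be reconciled as well, e.g. by collapsing the feature axis with a convolution or a final Reshape; the right choice depends on how the two features should be combined.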

dnd 5e – Can beasts use the Charge feature twice with Extra Attack?

Now that sidekicks can learn Extra Attack (Tasha’s Warrior sidekick class), I was wondering how this works. For example, if an elk charges 20 feet and attacks, getting the charge bonus, can it now use the extra attack to get the charge bonus again without moving, since it has already met the prerequisite?

Php search script feature – Stack Overflow

Good day.
I have a PHP-based web application on my localhost, and I want to add a search feature that can search the database/tables. I checked other freely available search source code, but it has its own database, as does my application.
Can I import the databases respectively and still have everything work fine? Or do I have to do some coding, and if so, how?
Thank you very much. :]
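Since the question is mostly about the pattern rather than a specific codebase, here is a minimal sketch of a database-backed search, written in Python with sqlite3 for brevity; the same parameterized-query idea carries over directly to PHP with PDO prepared statements. The table and column names (“products”, “name”) are invented for the example:

```python
import sqlite3

# Hypothetical in-memory database standing in for the application's own tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO products (name) VALUES (?)",
                 [("red apple",), ("green pear",), ("apple pie",)])

def search(term):
    # Parameterized LIKE query: the placeholder keeps user input out of the
    # SQL string itself, which is the key point regardless of language.
    pattern = f"%{term}%"
    return conn.execute(
        "SELECT id, name FROM products WHERE name LIKE ?", (pattern,)
    ).fetchall()

print(search("apple"))  # → [(1, 'red apple'), (3, 'apple pie')]
```

The pattern is the same whether the search code ships with its own database or is pointed at an existing one: what matters is that the query targets your application’s actual table and column names.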

dnd 5e – How do the Ki-Fueled Attack Optional Feature and the Martial Arts Feature differ?

The main difference is that actions which don’t make attacks but do cost ki trigger Ki-Fueled Attack. So if you are a Way of the Four Elements monk and cast a spell with your action, you can attack as a bonus action with Ki-Fueled Attack.

Ki-Fueled Attack also allows you to make a ranged attack, so a Kensei who spends a ki point on their main attack can make a bonus-action attack with their ranged weapon.

Ki-Fueled Attack was added to allow subclasses to make a bonus-action attack in situations where they didn’t qualify for the Martial Arts bonus action. It also allows monks to upgrade an unarmed strike to a weapon attack, if they qualify for both.

Ki-Fueled Attack Optional Feature Versus Martial Arts Feature

The Martial Arts feature allows a bonus-action unarmed strike after a main attack with an unarmed strike or a monk weapon.
The Ki-Fueled Attack feature allows either an unarmed strike or a monk weapon attack as a bonus action after spending a ki point (such as on Stunning Strike) as part of the action.
Am I right in saying that the differences between the two are limited to the following?

  1. Martial Arts has to follow an attack; Ki-Fueled Attack can follow something else, such as a subclass feature.
  2. Martial Arts only grants an unarmed bonus attack; the Ki-Fueled attack can be a weapon attack.
  3. Both (and either) only allow for one additional attack as a bonus action.

dnd 5e – How does the altered Extra Attack feature of the Bladesinger (Tasha’s Cauldron version) interact with Fighter’s additional Extra Attacks?

Either 1 attack + 1 cantrip, or all of the attacks from Fighter.

This is the RAW ruling. If you elect to replace one of your attacks with a cantrip, then you are using the Bladesinger’s Extra Attack feature, which says:

You can attack twice, instead of once, whenever you take the Attack action on your turn. Moreover, you can cast one of your cantrips in place of one of those attacks.

Since this feature does not allow making more than one attack along with casting a cantrip, one attack plus one cantrip is the limit when using the cantrip.

If you are making 2 or more attacks, then you cannot cast the cantrip, since the feature used to cast the cantrip does not permit casting the cantrip when you make 2 or more attacks. Essentially, you choose to either use the Fighter’s extra attack and make 2 or more attacks, or you use the Bladesinger’s extra attack and make 1 attack and cast 1 cantrip.

I should also mention the multiclassing rules for Extra Attack:

If you gain the Extra Attack class feature from more than one class, the features don’t add together. You can’t make more than two attacks with this feature unless it says you do (as the fighter’s version of Extra Attack does). Similarly, the warlock’s eldritch invocation Thirsting Blade doesn’t give you additional attacks if you also have Extra Attack.

It isn’t just that the attacks don’t add together; the entire features don’t add together. This further supports the ruling that when you have Extra Attack from multiple sources, you choose which one to use when you take the Attack action.